2026-03-28 00:00:07.954030 | Job console starting
2026-03-28 00:00:08.022761 | Updating git repos
2026-03-28 00:00:08.238066 | Cloning repos into workspace
2026-03-28 00:00:08.497923 | Restoring repo states
2026-03-28 00:00:08.519990 | Merging changes
2026-03-28 00:00:08.520009 | Checking out repos
2026-03-28 00:00:08.933660 | Preparing playbooks
2026-03-28 00:00:09.924190 | Running Ansible setup
2026-03-28 00:00:16.767687 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-28 00:00:18.103113 |
2026-03-28 00:00:18.103308 | PLAY [Base pre]
2026-03-28 00:00:18.121600 |
2026-03-28 00:00:18.121758 | TASK [Setup log path fact]
2026-03-28 00:00:18.143622 | orchestrator | ok
2026-03-28 00:00:18.165118 |
2026-03-28 00:00:18.165315 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-28 00:00:18.196147 | orchestrator | ok
2026-03-28 00:00:18.211353 |
2026-03-28 00:00:18.211493 | TASK [emit-job-header : Print job information]
2026-03-28 00:00:18.262243 | # Job Information
2026-03-28 00:00:18.262473 | Ansible Version: 2.16.14
2026-03-28 00:00:18.262507 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-28 00:00:18.262544 | Pipeline: periodic-midnight
2026-03-28 00:00:18.262567 | Executor: 521e9411259a
2026-03-28 00:00:18.262584 | Triggered by: https://github.com/osism/testbed
2026-03-28 00:00:18.262603 | Event ID: 7d11dc1fbab545418744be3ecae96668
2026-03-28 00:00:18.270270 |
2026-03-28 00:00:18.270396 | LOOP [emit-job-header : Print node information]
2026-03-28 00:00:18.368764 | orchestrator | ok:
2026-03-28 00:00:18.369004 | orchestrator | # Node Information
2026-03-28 00:00:18.369037 | orchestrator | Inventory Hostname: orchestrator
2026-03-28 00:00:18.369059 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-28 00:00:18.369078 | orchestrator | Username: zuul-testbed02
2026-03-28 00:00:18.369095 | orchestrator | Distro: Debian 12.13
2026-03-28 00:00:18.369115 | orchestrator | Provider: static-testbed
2026-03-28 00:00:18.369133 | orchestrator | Region:
2026-03-28 00:00:18.369150 | orchestrator | Label: testbed-orchestrator
2026-03-28 00:00:18.369166 | orchestrator | Product Name: OpenStack Nova
2026-03-28 00:00:18.369194 | orchestrator | Interface IP: 81.163.193.140
2026-03-28 00:00:18.388534 |
2026-03-28 00:00:18.388638 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-28 00:00:19.599180 | orchestrator -> localhost | changed
2026-03-28 00:00:19.606004 |
2026-03-28 00:00:19.606097 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-28 00:00:21.647156 | orchestrator -> localhost | changed
2026-03-28 00:00:21.658409 |
2026-03-28 00:00:21.658503 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-28 00:00:22.155351 | orchestrator -> localhost | ok
2026-03-28 00:00:22.160897 |
2026-03-28 00:00:22.161029 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-28 00:00:22.199587 | orchestrator | ok
2026-03-28 00:00:22.228434 | orchestrator | included: /var/lib/zuul/builds/e15732348dc84737bc9145d0d2f89ba4/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-28 00:00:22.234828 |
2026-03-28 00:00:22.246966 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-28 00:00:28.440089 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-28 00:00:28.441236 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/e15732348dc84737bc9145d0d2f89ba4/work/e15732348dc84737bc9145d0d2f89ba4_id_rsa
2026-03-28 00:00:28.441298 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/e15732348dc84737bc9145d0d2f89ba4/work/e15732348dc84737bc9145d0d2f89ba4_id_rsa.pub
2026-03-28 00:00:28.441323 | orchestrator -> localhost | The key fingerprint is:
2026-03-28 00:00:28.441342 | orchestrator -> localhost | SHA256:lpZazWPSe2WrdTx40UEZlA95m+gmtu68u1fk6bNlxmc zuul-build-sshkey
2026-03-28 00:00:28.441360 | orchestrator -> localhost | The key's randomart image is:
2026-03-28 00:00:28.441386 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-28 00:00:28.441404 | orchestrator -> localhost | | .=+|
2026-03-28 00:00:28.441422 | orchestrator -> localhost | | +o.|
2026-03-28 00:00:28.441438 | orchestrator -> localhost | | .++|
2026-03-28 00:00:28.441454 | orchestrator -> localhost | | * . ++|
2026-03-28 00:00:28.441471 | orchestrator -> localhost | | S * . =.o|
2026-03-28 00:00:28.441493 | orchestrator -> localhost | | = o = =.B.|
2026-03-28 00:00:28.441510 | orchestrator -> localhost | | . o =.=oE|
2026-03-28 00:00:28.441526 | orchestrator -> localhost | | .o +.B+|
2026-03-28 00:00:28.441543 | orchestrator -> localhost | | oB* .o|
2026-03-28 00:00:28.441560 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-28 00:00:28.441610 | orchestrator -> localhost | ok: Runtime: 0:00:05.280231
2026-03-28 00:00:28.447870 |
2026-03-28 00:00:28.447941 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-28 00:00:28.476229 | orchestrator | ok
2026-03-28 00:00:28.491001 | orchestrator | included: /var/lib/zuul/builds/e15732348dc84737bc9145d0d2f89ba4/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-28 00:00:28.505420 |
2026-03-28 00:00:28.505514 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-28 00:00:28.542709 | orchestrator | skipping: Conditional result was False
2026-03-28 00:00:28.549354 |
2026-03-28 00:00:28.549456 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-28 00:00:29.295543 | orchestrator | changed
2026-03-28 00:00:29.303020 |
2026-03-28 00:00:29.303111 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-28 00:00:29.580128 | orchestrator | ok
2026-03-28 00:00:29.586603 |
2026-03-28 00:00:29.586690 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-28 00:00:30.113826 | orchestrator | ok
2026-03-28 00:00:30.118616 |
2026-03-28 00:00:30.118696 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-28 00:00:30.635152 | orchestrator | ok
2026-03-28 00:00:30.639990 |
2026-03-28 00:00:30.640073 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-28 00:00:30.665399 | orchestrator | skipping: Conditional result was False
2026-03-28 00:00:30.671870 |
2026-03-28 00:00:30.671951 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-28 00:00:31.890738 | orchestrator -> localhost | changed
2026-03-28 00:00:31.902183 |
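The "Create Temp SSH key" task above generates a per-build RSA keypair in the build's work directory. A minimal standalone reproduction of that step might look like the following (the `workdir` path and `build_id_rsa` file name are illustrative, not the role's actual variables; the role names the key after the build UUID):

```shell
# Generate an unencrypted 3072-bit RSA keypair with the zuul-build-sshkey
# comment, mirroring the log output above.
set -e
workdir="$(mktemp -d)"
ssh-keygen -t rsa -b 3072 -N '' -C zuul-build-sshkey -f "$workdir/build_id_rsa"
# The task then installs the public half into authorized_keys on all nodes.
ls "$workdir/build_id_rsa" "$workdir/build_id_rsa.pub"
```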
2026-03-28 00:00:31.902303 | TASK [add-build-sshkey : Add back temp key]
2026-03-28 00:00:32.585706 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/e15732348dc84737bc9145d0d2f89ba4/work/e15732348dc84737bc9145d0d2f89ba4_id_rsa (zuul-build-sshkey)
2026-03-28 00:00:32.585890 | orchestrator -> localhost | ok: Runtime: 0:00:00.019820
2026-03-28 00:00:32.598751 |
2026-03-28 00:00:32.598865 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-28 00:00:33.290052 | orchestrator | ok
2026-03-28 00:00:33.297982 |
2026-03-28 00:00:33.298089 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-28 00:00:33.331649 | orchestrator | skipping: Conditional result was False
2026-03-28 00:00:33.557187 |
2026-03-28 00:00:33.557304 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-28 00:00:34.766707 | orchestrator | ok
2026-03-28 00:00:34.789297 |
2026-03-28 00:00:34.789405 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-28 00:00:34.848948 | orchestrator | ok
2026-03-28 00:00:34.864555 |
2026-03-28 00:00:34.864656 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-28 00:00:35.628571 | orchestrator -> localhost | ok
2026-03-28 00:00:35.634853 |
2026-03-28 00:00:35.634943 | TASK [validate-host : Collect information about the host]
2026-03-28 00:00:37.557880 | orchestrator | ok
2026-03-28 00:00:37.600445 |
2026-03-28 00:00:37.600564 | TASK [validate-host : Sanitize hostname]
2026-03-28 00:00:37.783243 | orchestrator | ok
2026-03-28 00:00:37.787621 |
2026-03-28 00:00:37.787699 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-28 00:00:39.095837 | orchestrator -> localhost | changed
2026-03-28 00:00:39.101298 |
2026-03-28 00:00:39.101383 | TASK [validate-host : Collect information about zuul worker]
2026-03-28 00:00:39.831341 | orchestrator | ok
2026-03-28 00:00:39.835973 |
2026-03-28 00:00:39.836063 | TASK [validate-host : Write out all zuul information for each host]
2026-03-28 00:00:41.339514 | orchestrator -> localhost | changed
2026-03-28 00:00:41.348713 |
2026-03-28 00:00:41.348799 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-28 00:00:41.672319 | orchestrator | ok
2026-03-28 00:00:41.677030 |
2026-03-28 00:00:41.677108 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-28 00:02:00.701406 | orchestrator | changed:
2026-03-28 00:02:00.701631 | orchestrator | .d..t...... src/
2026-03-28 00:02:00.701667 | orchestrator | .d..t...... src/github.com/
2026-03-28 00:02:00.701692 | orchestrator | .d..t...... src/github.com/osism/
2026-03-28 00:02:00.701714 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-28 00:02:00.701735 | orchestrator | RedHat.yml
2026-03-28 00:02:00.716770 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-28 00:02:00.716792 | orchestrator | RedHat.yml
2026-03-28 00:02:00.716847 | orchestrator | = 1.53.0"...
2026-03-28 00:02:13.709815 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-28 00:02:13.848424 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-28 00:02:14.340952 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-28 00:02:14.406009 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-28 00:02:15.227435 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-28 00:02:15.291161 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-28 00:02:15.841219 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-28 00:02:15.841287 | orchestrator |
2026-03-28 00:02:15.841294 | orchestrator | Providers are signed by their developers.
2026-03-28 00:02:15.841299 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-28 00:02:15.841304 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-28 00:02:15.841320 | orchestrator |
2026-03-28 00:02:15.841325 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-28 00:02:15.841329 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-28 00:02:15.841340 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-28 00:02:15.841345 | orchestrator | you run "tofu init" in the future.
2026-03-28 00:02:15.841609 | orchestrator |
2026-03-28 00:02:15.841618 | orchestrator | OpenTofu has been successfully initialized!
2026-03-28 00:02:15.841640 | orchestrator |
2026-03-28 00:02:15.841645 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-28 00:02:15.841649 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-28 00:02:15.841653 | orchestrator | should now work.
2026-03-28 00:02:15.841657 | orchestrator |
2026-03-28 00:02:15.841671 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-28 00:02:15.841675 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-28 00:02:15.841679 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-28 00:02:16.015453 | orchestrator | Created and switched to workspace "ci"!
2026-03-28 00:02:16.015569 | orchestrator |
2026-03-28 00:02:16.015583 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-28 00:02:16.015594 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-28 00:02:16.015602 | orchestrator | for this configuration.
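The provider installs and lock-file note above correspond to a `required_providers` block roughly like the following. This is a sketch reconstructed from the versions the log reports; the constraint for the openstack provider is only partially visible in the log (`= 1.53.0"...`), so the `>= 1.53.0` shown here is an assumption, as are the block's placement and formatting:

```hcl
terraform {
  required_providers {
    # Versions as installed by "tofu init" in the log above.
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # assumed constraint; log shows v3.4.0 installed
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # log shows v2.7.0 installed
    }
    null = {
      source = "hashicorp/null" # log shows v3.2.4 installed
    }
  }
}
```

Committing the generated `.terraform.lock.hcl` alongside such a block pins these selections for later `tofu init` runs, as the log output recommends.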
2026-03-28 00:02:16.178380 | orchestrator | ci.auto.tfvars
2026-03-28 00:02:16.529514 | orchestrator | default_custom.tf
2026-03-28 00:02:21.280948 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-28 00:02:21.866970 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-28 00:02:22.115296 | orchestrator |
2026-03-28 00:02:22.115363 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-28 00:02:22.115409 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-28 00:02:22.115437 | orchestrator |   + create
2026-03-28 00:02:22.115453 | orchestrator |  <= read (data resources)
2026-03-28 00:02:22.115466 | orchestrator |
2026-03-28 00:02:22.115471 | orchestrator | OpenTofu will perform the following actions:
2026-03-28 00:02:22.115573 | orchestrator |
2026-03-28 00:02:22.115587 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-28 00:02:22.115591 | orchestrator | # (config refers to values not yet known)
2026-03-28 00:02:22.115596 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-28 00:02:22.115600 | orchestrator | + checksum = (known after apply)
2026-03-28 00:02:22.115605 | orchestrator | + created_at = (known after apply)
2026-03-28 00:02:22.115609 | orchestrator | + file = (known after apply)
2026-03-28 00:02:22.115613 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.115633 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:22.115637 | orchestrator | + min_disk_gb = (known after apply)
2026-03-28 00:02:22.115642 | orchestrator | + min_ram_mb = (known after apply)
2026-03-28 00:02:22.115646 | orchestrator | + most_recent = true
2026-03-28 00:02:22.115650 | orchestrator | + name = (known after apply)
2026-03-28 00:02:22.115654 | orchestrator | + protected = (known after apply)
2026-03-28 00:02:22.115658 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.115664 | orchestrator | + schema = (known after apply)
2026-03-28 00:02:22.115668 | orchestrator | + size_bytes = (known after apply)
2026-03-28 00:02:22.115672 | orchestrator | + tags = (known after apply)
2026-03-28 00:02:22.115676 | orchestrator | + updated_at = (known after apply)
2026-03-28 00:02:22.115680 | orchestrator | }
2026-03-28 00:02:22.115758 | orchestrator |
2026-03-28 00:02:22.115770 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-28 00:02:22.115775 | orchestrator | # (config refers to values not yet known)
2026-03-28 00:02:22.115779 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-28 00:02:22.115783 | orchestrator | + checksum = (known after apply)
2026-03-28 00:02:22.115787 | orchestrator | + created_at = (known after apply)
2026-03-28 00:02:22.115791 | orchestrator | + file = (known after apply)
2026-03-28 00:02:22.115795 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.115799 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:22.115803 | orchestrator | + min_disk_gb = (known after apply)
2026-03-28 00:02:22.115806 | orchestrator | + min_ram_mb = (known after apply)
2026-03-28 00:02:22.115810 | orchestrator | + most_recent = true
2026-03-28 00:02:22.115814 | orchestrator | + name = (known after apply)
2026-03-28 00:02:22.115818 | orchestrator | + protected = (known after apply)
2026-03-28 00:02:22.115822 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.115826 | orchestrator | + schema = (known after apply)
2026-03-28 00:02:22.115829 | orchestrator | + size_bytes = (known after apply)
2026-03-28 00:02:22.115833 | orchestrator | + tags = (known after apply)
2026-03-28 00:02:22.115837 | orchestrator | + updated_at = (known after apply)
2026-03-28 00:02:22.115841 | orchestrator | }
2026-03-28 00:02:22.115912 | orchestrator |
2026-03-28 00:02:22.115923 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-28 00:02:22.115928 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-28 00:02:22.115932 | orchestrator | + content = (known after apply)
2026-03-28 00:02:22.115937 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-28 00:02:22.115940 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-28 00:02:22.115944 | orchestrator | + content_md5 = (known after apply)
2026-03-28 00:02:22.115948 | orchestrator | + content_sha1 = (known after apply)
2026-03-28 00:02:22.115952 | orchestrator | + content_sha256 = (known after apply)
2026-03-28 00:02:22.115955 | orchestrator | + content_sha512 = (known after apply)
2026-03-28 00:02:22.115959 | orchestrator | + directory_permission = "0777"
2026-03-28 00:02:22.115963 | orchestrator | + file_permission = "0644"
2026-03-28 00:02:22.115967 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-28 00:02:22.115970 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.115974 | orchestrator | }
2026-03-28 00:02:22.116043 | orchestrator |
2026-03-28 00:02:22.116054 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-28 00:02:22.116059 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-28 00:02:22.116062 | orchestrator | + content = (known after apply)
2026-03-28 00:02:22.116066 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-28 00:02:22.116070 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-28 00:02:22.116074 | orchestrator | + content_md5 = (known after apply)
2026-03-28 00:02:22.116078 | orchestrator | + content_sha1 = (known after apply)
2026-03-28 00:02:22.116081 | orchestrator | + content_sha256 = (known after apply)
2026-03-28 00:02:22.116085 | orchestrator | + content_sha512 = (known after apply)
2026-03-28 00:02:22.116089 | orchestrator | + directory_permission = "0777"
2026-03-28 00:02:22.116093 | orchestrator | + file_permission = "0644"
2026-03-28 00:02:22.116101 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-28 00:02:22.116105 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.116109 | orchestrator | }
2026-03-28 00:02:22.116173 | orchestrator |
2026-03-28 00:02:22.116189 | orchestrator | # local_file.inventory will be created
2026-03-28 00:02:22.116194 | orchestrator | + resource "local_file" "inventory" {
2026-03-28 00:02:22.116198 | orchestrator | + content = (known after apply)
2026-03-28 00:02:22.116202 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-28 00:02:22.116205 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-28 00:02:22.116209 | orchestrator | + content_md5 = (known after apply)
2026-03-28 00:02:22.116213 | orchestrator | + content_sha1 = (known after apply)
2026-03-28 00:02:22.116217 | orchestrator | + content_sha256 = (known after apply)
2026-03-28 00:02:22.116221 | orchestrator | + content_sha512 = (known after apply)
2026-03-28 00:02:22.116225 | orchestrator | + directory_permission = "0777"
2026-03-28 00:02:22.116228 | orchestrator | + file_permission = "0644"
2026-03-28 00:02:22.116232 | orchestrator | + filename = "inventory.ci"
2026-03-28 00:02:22.116236 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.116240 | orchestrator | }
2026-03-28 00:02:22.116308 | orchestrator |
2026-03-28 00:02:22.116320 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-28 00:02:22.116324 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-28 00:02:22.116328 | orchestrator | + content = (sensitive value)
2026-03-28 00:02:22.116332 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-28 00:02:22.116336 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-28 00:02:22.116340 | orchestrator | + content_md5 = (known after apply)
2026-03-28 00:02:22.116343 | orchestrator | + content_sha1 = (known after apply)
2026-03-28 00:02:22.116347 | orchestrator | + content_sha256 = (known after apply)
2026-03-28 00:02:22.116351 | orchestrator | + content_sha512 = (known after apply)
2026-03-28 00:02:22.116354 | orchestrator | + directory_permission = "0700"
2026-03-28 00:02:22.116358 | orchestrator | + file_permission = "0600"
2026-03-28 00:02:22.116362 | orchestrator | + filename = ".id_rsa.ci"
2026-03-28 00:02:22.116366 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.116387 | orchestrator | }
2026-03-28 00:02:22.116409 | orchestrator |
2026-03-28 00:02:22.116420 | orchestrator | # null_resource.node_semaphore will be created
2026-03-28 00:02:22.116424 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-28 00:02:22.116428 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.116432 | orchestrator | }
2026-03-28 00:02:22.116497 | orchestrator |
2026-03-28 00:02:22.116509 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-28 00:02:22.116513 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-28 00:02:22.116517 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:22.116521 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:22.116525 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.116528 | orchestrator | + image_id = (known after apply)
2026-03-28 00:02:22.116532 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:22.116536 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-28 00:02:22.116540 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.116544 | orchestrator | + size = 80
2026-03-28 00:02:22.116547 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:22.116551 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:22.116555 | orchestrator | }
2026-03-28 00:02:22.116614 | orchestrator |
2026-03-28 00:02:22.116624 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-28 00:02:22.116629 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 00:02:22.116633 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:22.116637 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:22.116640 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.116648 | orchestrator | + image_id = (known after apply)
2026-03-28 00:02:22.116652 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:22.116655 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-28 00:02:22.116659 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.116663 | orchestrator | + size = 80
2026-03-28 00:02:22.116667 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:22.116670 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:22.116674 | orchestrator | }
2026-03-28 00:02:22.116731 | orchestrator |
2026-03-28 00:02:22.116742 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-28 00:02:22.116746 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 00:02:22.116750 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:22.116754 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:22.116758 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.116762 | orchestrator | + image_id = (known after apply)
2026-03-28 00:02:22.116765 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:22.116769 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-28 00:02:22.116773 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.116777 | orchestrator | + size = 80
2026-03-28 00:02:22.116780 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:22.116784 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:22.116788 | orchestrator | }
2026-03-28 00:02:22.116844 | orchestrator |
2026-03-28 00:02:22.116855 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-28 00:02:22.116859 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 00:02:22.116863 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:22.116867 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:22.116871 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.116874 | orchestrator | + image_id = (known after apply)
2026-03-28 00:02:22.116878 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:22.116882 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-28 00:02:22.116886 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.116889 | orchestrator | + size = 80
2026-03-28 00:02:22.116893 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:22.116897 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:22.116902 | orchestrator | }
2026-03-28 00:02:22.116992 | orchestrator |
2026-03-28 00:02:22.117011 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-28 00:02:22.117018 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 00:02:22.117024 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:22.117030 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:22.117036 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.117043 | orchestrator | + image_id = (known after apply)
2026-03-28 00:02:22.117047 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:22.117055 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-28 00:02:22.117059 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.117063 | orchestrator | + size = 80
2026-03-28 00:02:22.117067 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:22.117071 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:22.117074 | orchestrator | }
2026-03-28 00:02:22.117140 | orchestrator |
2026-03-28 00:02:22.117151 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-28 00:02:22.117156 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 00:02:22.117160 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:22.117164 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:22.117167 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.117176 | orchestrator | + image_id = (known after apply)
2026-03-28 00:02:22.117180 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:22.117184 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-28 00:02:22.117187 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.117191 | orchestrator | + size = 80
2026-03-28 00:02:22.117195 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:22.117199 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:22.117203 | orchestrator | }
2026-03-28 00:02:22.117265 | orchestrator |
2026-03-28 00:02:22.117277 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-28 00:02:22.117281 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 00:02:22.117285 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:22.117289 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:22.117293 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.117296 | orchestrator | + image_id = (known after apply)
2026-03-28 00:02:22.117300 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:22.117304 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-28 00:02:22.117308 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.117312 | orchestrator | + size = 80
2026-03-28 00:02:22.117315 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:22.117319 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:22.117323 | orchestrator | }
2026-03-28 00:02:22.117407 | orchestrator |
2026-03-28 00:02:22.117420 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-28 00:02:22.117425 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:22.117429 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:22.117433 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:22.117437 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.117441 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:22.117445 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-28 00:02:22.117449 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.117453 | orchestrator | + size = 20
2026-03-28 00:02:22.117457 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:22.117460 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:22.117464 | orchestrator | }
2026-03-28 00:02:22.117522 | orchestrator |
2026-03-28 00:02:22.117533 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-28 00:02:22.117537 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:22.117541 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:22.117545 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:22.117549 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.117552 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:22.117556 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-28 00:02:22.117560 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.117564 | orchestrator | + size = 20
2026-03-28 00:02:22.117568 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:22.117571 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:22.117575 | orchestrator | }
2026-03-28 00:02:22.117632 | orchestrator |
2026-03-28 00:02:22.117643 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-28 00:02:22.117648 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:22.117651 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:22.117655 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:22.117659 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.117663 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:22.117667 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-28 00:02:22.117670 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.117682 | orchestrator | + size = 20
2026-03-28 00:02:22.117686 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:22.117689 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:22.117693 | orchestrator | }
2026-03-28 00:02:22.117748 | orchestrator |
2026-03-28 00:02:22.117759 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-28 00:02:22.117764 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:22.117768 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:22.117771 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:22.117775 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.117779 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:22.117783 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-28 00:02:22.117787 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.117791 | orchestrator | + size = 20
2026-03-28 00:02:22.117794 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:22.117798 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:22.117802 | orchestrator | }
2026-03-28 00:02:22.117856 | orchestrator |
2026-03-28 00:02:22.117868 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-28 00:02:22.117872 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:22.117876 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:22.117880 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:22.117883 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.117887 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:22.117891 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-28 00:02:22.117895 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.117902 | orchestrator | + size = 20
2026-03-28 00:02:22.117906 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:22.117910 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:22.117914 | orchestrator | }
2026-03-28 00:02:22.117971 | orchestrator |
2026-03-28 00:02:22.117982 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-28 00:02:22.117987 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:22.117991 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:22.117995 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:22.117998 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.118002 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:22.118006 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-28 00:02:22.118010 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.118041 | orchestrator | + size = 20
2026-03-28 00:02:22.118047 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:22.118053 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:22.118059 | orchestrator | }
2026-03-28 00:02:22.118148 | orchestrator |
2026-03-28 00:02:22.118165 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-28 00:02:22.118171 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:22.118177 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:22.118182 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:22.118200 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.118207 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:22.118214 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-28 00:02:22.118220 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.118225 | orchestrator | + size = 20
2026-03-28 00:02:22.118231 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:22.118237 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:22.118243 | orchestrator | }
2026-03-28 00:02:22.118354 | orchestrator |
2026-03-28 00:02:22.118390 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-28 00:02:22.118398 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:22.118411 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:22.118417 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:22.118423 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.118430 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:22.118435 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-28 00:02:22.118441 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.118447 | orchestrator | + size = 20
2026-03-28 00:02:22.118453 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:22.118459 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:22.118465 | orchestrator | }
2026-03-28 00:02:22.118569 | orchestrator |
2026-03-28 00:02:22.118589 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-28 00:02:22.118596 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-28 00:02:22.118601 | orchestrator | + attachment = (known after apply) 2026-03-28 00:02:22.118607 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:22.118613 | orchestrator | + id = (known after apply) 2026-03-28 00:02:22.118619 | orchestrator | + metadata = (known after apply) 2026-03-28 00:02:22.118624 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-28 00:02:22.118630 | orchestrator | + region = (known after apply) 2026-03-28 00:02:22.118636 | orchestrator | + size = 20 2026-03-28 00:02:22.118642 | orchestrator | + volume_retype_policy = "never" 2026-03-28 00:02:22.118647 | orchestrator | + volume_type = "ssd" 2026-03-28 00:02:22.118654 | orchestrator | } 2026-03-28 00:02:22.118931 | orchestrator | 2026-03-28 00:02:22.118959 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-28 00:02:22.118967 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-28 00:02:22.118973 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 00:02:22.118978 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 00:02:22.118984 | orchestrator | + all_metadata = (known after apply) 2026-03-28 00:02:22.118989 | orchestrator | + all_tags = (known after apply) 2026-03-28 00:02:22.118995 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:22.119001 | orchestrator | + config_drive = true 2026-03-28 00:02:22.119006 | orchestrator | + created = (known after apply) 2026-03-28 00:02:22.119013 | orchestrator | + flavor_id = (known after apply) 2026-03-28 00:02:22.119019 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-28 00:02:22.119025 | orchestrator | + force_delete = false 2026-03-28 00:02:22.119031 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 00:02:22.119037 | 
orchestrator | + id = (known after apply) 2026-03-28 00:02:22.119042 | orchestrator | + image_id = (known after apply) 2026-03-28 00:02:22.119048 | orchestrator | + image_name = (known after apply) 2026-03-28 00:02:22.119054 | orchestrator | + key_pair = "testbed" 2026-03-28 00:02:22.119060 | orchestrator | + name = "testbed-manager" 2026-03-28 00:02:22.119065 | orchestrator | + power_state = "active" 2026-03-28 00:02:22.119071 | orchestrator | + region = (known after apply) 2026-03-28 00:02:22.119076 | orchestrator | + security_groups = (known after apply) 2026-03-28 00:02:22.119082 | orchestrator | + stop_before_destroy = false 2026-03-28 00:02:22.119088 | orchestrator | + updated = (known after apply) 2026-03-28 00:02:22.119094 | orchestrator | + user_data = (sensitive value) 2026-03-28 00:02:22.119100 | orchestrator | 2026-03-28 00:02:22.119106 | orchestrator | + block_device { 2026-03-28 00:02:22.119113 | orchestrator | + boot_index = 0 2026-03-28 00:02:22.119119 | orchestrator | + delete_on_termination = false 2026-03-28 00:02:22.119133 | orchestrator | + destination_type = "volume" 2026-03-28 00:02:22.119139 | orchestrator | + multiattach = false 2026-03-28 00:02:22.119145 | orchestrator | + source_type = "volume" 2026-03-28 00:02:22.119152 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:22.119166 | orchestrator | } 2026-03-28 00:02:22.119170 | orchestrator | 2026-03-28 00:02:22.119174 | orchestrator | + network { 2026-03-28 00:02:22.119178 | orchestrator | + access_network = false 2026-03-28 00:02:22.119182 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 00:02:22.119185 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 00:02:22.119189 | orchestrator | + mac = (known after apply) 2026-03-28 00:02:22.119193 | orchestrator | + name = (known after apply) 2026-03-28 00:02:22.119197 | orchestrator | + port = (known after apply) 2026-03-28 00:02:22.119201 | orchestrator | + uuid = (known after apply) 2026-03-28 
00:02:22.119205 | orchestrator | } 2026-03-28 00:02:22.119208 | orchestrator | } 2026-03-28 00:02:22.119458 | orchestrator | 2026-03-28 00:02:22.119474 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-28 00:02:22.119479 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-28 00:02:22.119483 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 00:02:22.119487 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 00:02:22.119490 | orchestrator | + all_metadata = (known after apply) 2026-03-28 00:02:22.119494 | orchestrator | + all_tags = (known after apply) 2026-03-28 00:02:22.119498 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:22.119502 | orchestrator | + config_drive = true 2026-03-28 00:02:22.119505 | orchestrator | + created = (known after apply) 2026-03-28 00:02:22.119509 | orchestrator | + flavor_id = (known after apply) 2026-03-28 00:02:22.119513 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 00:02:22.119517 | orchestrator | + force_delete = false 2026-03-28 00:02:22.119521 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 00:02:22.119524 | orchestrator | + id = (known after apply) 2026-03-28 00:02:22.119528 | orchestrator | + image_id = (known after apply) 2026-03-28 00:02:22.119532 | orchestrator | + image_name = (known after apply) 2026-03-28 00:02:22.119536 | orchestrator | + key_pair = "testbed" 2026-03-28 00:02:22.119540 | orchestrator | + name = "testbed-node-0" 2026-03-28 00:02:22.119543 | orchestrator | + power_state = "active" 2026-03-28 00:02:22.119547 | orchestrator | + region = (known after apply) 2026-03-28 00:02:22.119551 | orchestrator | + security_groups = (known after apply) 2026-03-28 00:02:22.119555 | orchestrator | + stop_before_destroy = false 2026-03-28 00:02:22.119558 | orchestrator | + updated = (known after apply) 2026-03-28 00:02:22.119562 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 00:02:22.119566 | orchestrator | 2026-03-28 00:02:22.119570 | orchestrator | + block_device { 2026-03-28 00:02:22.119574 | orchestrator | + boot_index = 0 2026-03-28 00:02:22.119578 | orchestrator | + delete_on_termination = false 2026-03-28 00:02:22.119581 | orchestrator | + destination_type = "volume" 2026-03-28 00:02:22.119585 | orchestrator | + multiattach = false 2026-03-28 00:02:22.119589 | orchestrator | + source_type = "volume" 2026-03-28 00:02:22.119593 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:22.119597 | orchestrator | } 2026-03-28 00:02:22.119600 | orchestrator | 2026-03-28 00:02:22.119604 | orchestrator | + network { 2026-03-28 00:02:22.119608 | orchestrator | + access_network = false 2026-03-28 00:02:22.119612 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 00:02:22.119616 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 00:02:22.119620 | orchestrator | + mac = (known after apply) 2026-03-28 00:02:22.119623 | orchestrator | + name = (known after apply) 2026-03-28 00:02:22.119627 | orchestrator | + port = (known after apply) 2026-03-28 00:02:22.119631 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:22.119635 | orchestrator | } 2026-03-28 00:02:22.119639 | orchestrator | } 2026-03-28 00:02:22.119865 | orchestrator | 2026-03-28 00:02:22.119878 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-28 00:02:22.119883 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-28 00:02:22.119887 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 00:02:22.119898 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 00:02:22.119902 | orchestrator | + all_metadata = (known after apply) 2026-03-28 00:02:22.119906 | orchestrator | + all_tags = (known after apply) 2026-03-28 00:02:22.119909 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:22.119913 
| orchestrator | + config_drive = true 2026-03-28 00:02:22.119917 | orchestrator | + created = (known after apply) 2026-03-28 00:02:22.119920 | orchestrator | + flavor_id = (known after apply) 2026-03-28 00:02:22.119924 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 00:02:22.119928 | orchestrator | + force_delete = false 2026-03-28 00:02:22.119932 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 00:02:22.119936 | orchestrator | + id = (known after apply) 2026-03-28 00:02:22.119939 | orchestrator | + image_id = (known after apply) 2026-03-28 00:02:22.119943 | orchestrator | + image_name = (known after apply) 2026-03-28 00:02:22.119947 | orchestrator | + key_pair = "testbed" 2026-03-28 00:02:22.119951 | orchestrator | + name = "testbed-node-1" 2026-03-28 00:02:22.119955 | orchestrator | + power_state = "active" 2026-03-28 00:02:22.119958 | orchestrator | + region = (known after apply) 2026-03-28 00:02:22.119962 | orchestrator | + security_groups = (known after apply) 2026-03-28 00:02:22.119966 | orchestrator | + stop_before_destroy = false 2026-03-28 00:02:22.119970 | orchestrator | + updated = (known after apply) 2026-03-28 00:02:22.119974 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 00:02:22.119978 | orchestrator | 2026-03-28 00:02:22.119981 | orchestrator | + block_device { 2026-03-28 00:02:22.119985 | orchestrator | + boot_index = 0 2026-03-28 00:02:22.119989 | orchestrator | + delete_on_termination = false 2026-03-28 00:02:22.119993 | orchestrator | + destination_type = "volume" 2026-03-28 00:02:22.119996 | orchestrator | + multiattach = false 2026-03-28 00:02:22.120000 | orchestrator | + source_type = "volume" 2026-03-28 00:02:22.120004 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:22.120008 | orchestrator | } 2026-03-28 00:02:22.120012 | orchestrator | 2026-03-28 00:02:22.120015 | orchestrator | + network { 2026-03-28 00:02:22.120019 | orchestrator | + access_network = 
false 2026-03-28 00:02:22.120023 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 00:02:22.120027 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 00:02:22.120030 | orchestrator | + mac = (known after apply) 2026-03-28 00:02:22.120034 | orchestrator | + name = (known after apply) 2026-03-28 00:02:22.120038 | orchestrator | + port = (known after apply) 2026-03-28 00:02:22.120042 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:22.120046 | orchestrator | } 2026-03-28 00:02:22.120049 | orchestrator | } 2026-03-28 00:02:22.120226 | orchestrator | 2026-03-28 00:02:22.120238 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-28 00:02:22.120242 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-28 00:02:22.120246 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 00:02:22.120250 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 00:02:22.120255 | orchestrator | + all_metadata = (known after apply) 2026-03-28 00:02:22.120259 | orchestrator | + all_tags = (known after apply) 2026-03-28 00:02:22.120267 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:22.120271 | orchestrator | + config_drive = true 2026-03-28 00:02:22.120275 | orchestrator | + created = (known after apply) 2026-03-28 00:02:22.120279 | orchestrator | + flavor_id = (known after apply) 2026-03-28 00:02:22.120283 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 00:02:22.120287 | orchestrator | + force_delete = false 2026-03-28 00:02:22.120290 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 00:02:22.120294 | orchestrator | + id = (known after apply) 2026-03-28 00:02:22.120298 | orchestrator | + image_id = (known after apply) 2026-03-28 00:02:22.120305 | orchestrator | + image_name = (known after apply) 2026-03-28 00:02:22.120309 | orchestrator | + key_pair = "testbed" 2026-03-28 00:02:22.120312 | orchestrator | + name = 
"testbed-node-2" 2026-03-28 00:02:22.120316 | orchestrator | + power_state = "active" 2026-03-28 00:02:22.120320 | orchestrator | + region = (known after apply) 2026-03-28 00:02:22.120324 | orchestrator | + security_groups = (known after apply) 2026-03-28 00:02:22.120327 | orchestrator | + stop_before_destroy = false 2026-03-28 00:02:22.120331 | orchestrator | + updated = (known after apply) 2026-03-28 00:02:22.120335 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 00:02:22.120339 | orchestrator | 2026-03-28 00:02:22.120343 | orchestrator | + block_device { 2026-03-28 00:02:22.120346 | orchestrator | + boot_index = 0 2026-03-28 00:02:22.120350 | orchestrator | + delete_on_termination = false 2026-03-28 00:02:22.120354 | orchestrator | + destination_type = "volume" 2026-03-28 00:02:22.120358 | orchestrator | + multiattach = false 2026-03-28 00:02:22.120361 | orchestrator | + source_type = "volume" 2026-03-28 00:02:22.120365 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:22.120385 | orchestrator | } 2026-03-28 00:02:22.120389 | orchestrator | 2026-03-28 00:02:22.120393 | orchestrator | + network { 2026-03-28 00:02:22.120396 | orchestrator | + access_network = false 2026-03-28 00:02:22.120400 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 00:02:22.120404 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 00:02:22.120408 | orchestrator | + mac = (known after apply) 2026-03-28 00:02:22.120411 | orchestrator | + name = (known after apply) 2026-03-28 00:02:22.120415 | orchestrator | + port = (known after apply) 2026-03-28 00:02:22.120419 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:22.120423 | orchestrator | } 2026-03-28 00:02:22.120426 | orchestrator | } 2026-03-28 00:02:22.120604 | orchestrator | 2026-03-28 00:02:22.120615 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-28 00:02:22.120619 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-28 00:02:22.120623 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 00:02:22.120627 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 00:02:22.120631 | orchestrator | + all_metadata = (known after apply) 2026-03-28 00:02:22.120635 | orchestrator | + all_tags = (known after apply) 2026-03-28 00:02:22.120638 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:22.120642 | orchestrator | + config_drive = true 2026-03-28 00:02:22.120646 | orchestrator | + created = (known after apply) 2026-03-28 00:02:22.120650 | orchestrator | + flavor_id = (known after apply) 2026-03-28 00:02:22.120654 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 00:02:22.120657 | orchestrator | + force_delete = false 2026-03-28 00:02:22.120661 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 00:02:22.120665 | orchestrator | + id = (known after apply) 2026-03-28 00:02:22.120669 | orchestrator | + image_id = (known after apply) 2026-03-28 00:02:22.120673 | orchestrator | + image_name = (known after apply) 2026-03-28 00:02:22.120677 | orchestrator | + key_pair = "testbed" 2026-03-28 00:02:22.120680 | orchestrator | + name = "testbed-node-3" 2026-03-28 00:02:22.120684 | orchestrator | + power_state = "active" 2026-03-28 00:02:22.120688 | orchestrator | + region = (known after apply) 2026-03-28 00:02:22.120692 | orchestrator | + security_groups = (known after apply) 2026-03-28 00:02:22.120696 | orchestrator | + stop_before_destroy = false 2026-03-28 00:02:22.120700 | orchestrator | + updated = (known after apply) 2026-03-28 00:02:22.120703 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 00:02:22.120707 | orchestrator | 2026-03-28 00:02:22.120711 | orchestrator | + block_device { 2026-03-28 00:02:22.120718 | orchestrator | + boot_index = 0 2026-03-28 00:02:22.120722 | orchestrator | + delete_on_termination = false 2026-03-28 
00:02:22.120726 | orchestrator | + destination_type = "volume" 2026-03-28 00:02:22.120733 | orchestrator | + multiattach = false 2026-03-28 00:02:22.120737 | orchestrator | + source_type = "volume" 2026-03-28 00:02:22.120741 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:22.120745 | orchestrator | } 2026-03-28 00:02:22.120748 | orchestrator | 2026-03-28 00:02:22.120752 | orchestrator | + network { 2026-03-28 00:02:22.120756 | orchestrator | + access_network = false 2026-03-28 00:02:22.120760 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 00:02:22.120764 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 00:02:22.120767 | orchestrator | + mac = (known after apply) 2026-03-28 00:02:22.120771 | orchestrator | + name = (known after apply) 2026-03-28 00:02:22.120775 | orchestrator | + port = (known after apply) 2026-03-28 00:02:22.120779 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:22.120782 | orchestrator | } 2026-03-28 00:02:22.120786 | orchestrator | } 2026-03-28 00:02:22.120962 | orchestrator | 2026-03-28 00:02:22.120973 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-28 00:02:22.120977 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-28 00:02:22.120981 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 00:02:22.120985 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 00:02:22.120989 | orchestrator | + all_metadata = (known after apply) 2026-03-28 00:02:22.120993 | orchestrator | + all_tags = (known after apply) 2026-03-28 00:02:22.120997 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:22.121000 | orchestrator | + config_drive = true 2026-03-28 00:02:22.121004 | orchestrator | + created = (known after apply) 2026-03-28 00:02:22.121008 | orchestrator | + flavor_id = (known after apply) 2026-03-28 00:02:22.121012 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 00:02:22.121016 | 
orchestrator | + force_delete = false 2026-03-28 00:02:22.121019 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 00:02:22.121023 | orchestrator | + id = (known after apply) 2026-03-28 00:02:22.121027 | orchestrator | + image_id = (known after apply) 2026-03-28 00:02:22.121031 | orchestrator | + image_name = (known after apply) 2026-03-28 00:02:22.121035 | orchestrator | + key_pair = "testbed" 2026-03-28 00:02:22.121039 | orchestrator | + name = "testbed-node-4" 2026-03-28 00:02:22.121042 | orchestrator | + power_state = "active" 2026-03-28 00:02:22.121046 | orchestrator | + region = (known after apply) 2026-03-28 00:02:22.121050 | orchestrator | + security_groups = (known after apply) 2026-03-28 00:02:22.121054 | orchestrator | + stop_before_destroy = false 2026-03-28 00:02:22.121058 | orchestrator | + updated = (known after apply) 2026-03-28 00:02:22.121062 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 00:02:22.121066 | orchestrator | 2026-03-28 00:02:22.121070 | orchestrator | + block_device { 2026-03-28 00:02:22.121073 | orchestrator | + boot_index = 0 2026-03-28 00:02:22.121077 | orchestrator | + delete_on_termination = false 2026-03-28 00:02:22.121081 | orchestrator | + destination_type = "volume" 2026-03-28 00:02:22.121085 | orchestrator | + multiattach = false 2026-03-28 00:02:22.121089 | orchestrator | + source_type = "volume" 2026-03-28 00:02:22.121092 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:22.121096 | orchestrator | } 2026-03-28 00:02:22.121100 | orchestrator | 2026-03-28 00:02:22.121104 | orchestrator | + network { 2026-03-28 00:02:22.121108 | orchestrator | + access_network = false 2026-03-28 00:02:22.121112 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 00:02:22.121115 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 00:02:22.121119 | orchestrator | + mac = (known after apply) 2026-03-28 00:02:22.121123 | orchestrator | + name = (known 
after apply) 2026-03-28 00:02:22.121127 | orchestrator | + port = (known after apply) 2026-03-28 00:02:22.121131 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:22.121136 | orchestrator | } 2026-03-28 00:02:22.121142 | orchestrator | } 2026-03-28 00:02:22.121414 | orchestrator | 2026-03-28 00:02:22.121441 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-28 00:02:22.121448 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-28 00:02:22.121453 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 00:02:22.121459 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 00:02:22.121465 | orchestrator | + all_metadata = (known after apply) 2026-03-28 00:02:22.121471 | orchestrator | + all_tags = (known after apply) 2026-03-28 00:02:22.121478 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:22.121482 | orchestrator | + config_drive = true 2026-03-28 00:02:22.121486 | orchestrator | + created = (known after apply) 2026-03-28 00:02:22.121490 | orchestrator | + flavor_id = (known after apply) 2026-03-28 00:02:22.121494 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 00:02:22.121497 | orchestrator | + force_delete = false 2026-03-28 00:02:22.121506 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 00:02:22.121510 | orchestrator | + id = (known after apply) 2026-03-28 00:02:22.121514 | orchestrator | + image_id = (known after apply) 2026-03-28 00:02:22.121517 | orchestrator | + image_name = (known after apply) 2026-03-28 00:02:22.121521 | orchestrator | + key_pair = "testbed" 2026-03-28 00:02:22.121525 | orchestrator | + name = "testbed-node-5" 2026-03-28 00:02:22.121529 | orchestrator | + power_state = "active" 2026-03-28 00:02:22.121532 | orchestrator | + region = (known after apply) 2026-03-28 00:02:22.121536 | orchestrator | + security_groups = (known after apply) 2026-03-28 00:02:22.121540 | orchestrator | + 
stop_before_destroy = false 2026-03-28 00:02:22.121544 | orchestrator | + updated = (known after apply) 2026-03-28 00:02:22.121548 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 00:02:22.121551 | orchestrator | 2026-03-28 00:02:22.121555 | orchestrator | + block_device { 2026-03-28 00:02:22.121559 | orchestrator | + boot_index = 0 2026-03-28 00:02:22.121563 | orchestrator | + delete_on_termination = false 2026-03-28 00:02:22.121566 | orchestrator | + destination_type = "volume" 2026-03-28 00:02:22.121570 | orchestrator | + multiattach = false 2026-03-28 00:02:22.121574 | orchestrator | + source_type = "volume" 2026-03-28 00:02:22.121577 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:22.121581 | orchestrator | } 2026-03-28 00:02:22.121588 | orchestrator | 2026-03-28 00:02:22.121594 | orchestrator | + network { 2026-03-28 00:02:22.121600 | orchestrator | + access_network = false 2026-03-28 00:02:22.121605 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 00:02:22.121611 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 00:02:22.121617 | orchestrator | + mac = (known after apply) 2026-03-28 00:02:22.121623 | orchestrator | + name = (known after apply) 2026-03-28 00:02:22.121629 | orchestrator | + port = (known after apply) 2026-03-28 00:02:22.121635 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:22.121641 | orchestrator | } 2026-03-28 00:02:22.121647 | orchestrator | } 2026-03-28 00:02:22.121729 | orchestrator | 2026-03-28 00:02:22.121747 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-28 00:02:22.121755 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-28 00:02:22.121759 | orchestrator | + fingerprint = (known after apply) 2026-03-28 00:02:22.121762 | orchestrator | + id = (known after apply) 2026-03-28 00:02:22.121766 | orchestrator | + name = "testbed" 2026-03-28 00:02:22.121770 | orchestrator | + private_key = 
(sensitive value) 2026-03-28 00:02:22.121774 | orchestrator | + public_key = (known after apply) 2026-03-28 00:02:22.121777 | orchestrator | + region = (known after apply) 2026-03-28 00:02:22.121781 | orchestrator | + user_id = (known after apply) 2026-03-28 00:02:22.121785 | orchestrator | } 2026-03-28 00:02:22.121826 | orchestrator | 2026-03-28 00:02:22.121837 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-28 00:02:22.121842 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-28 00:02:22.121852 | orchestrator | + device = (known after apply) 2026-03-28 00:02:22.121856 | orchestrator | + id = (known after apply) 2026-03-28 00:02:22.121860 | orchestrator | + instance_id = (known after apply) 2026-03-28 00:02:22.121865 | orchestrator | + region = (known after apply) 2026-03-28 00:02:22.121872 | orchestrator | + volume_id = (known after apply) 2026-03-28 00:02:22.121878 | orchestrator | } 2026-03-28 00:02:22.121940 | orchestrator | 2026-03-28 00:02:22.121958 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-28 00:02:22.121966 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-28 00:02:22.121970 | orchestrator | + device = (known after apply) 2026-03-28 00:02:22.121974 | orchestrator | + id = (known after apply) 2026-03-28 00:02:22.121978 | orchestrator | + instance_id = (known after apply) 2026-03-28 00:02:22.121982 | orchestrator | + region = (known after apply) 2026-03-28 00:02:22.121985 | orchestrator | + volume_id = (known after apply) 2026-03-28 00:02:22.121989 | orchestrator | } 2026-03-28 00:02:22.122053 | orchestrator | 2026-03-28 00:02:22.122065 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-28 00:02:22.122070 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
2026-03-28 00:02:22.122073 | orchestrator | + device = (known after apply)
2026-03-28 00:02:22.122078 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.122081 | orchestrator | + instance_id = (known after apply)
2026-03-28 00:02:22.122085 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.122089 | orchestrator | + volume_id = (known after apply)
2026-03-28 00:02:22.122092 | orchestrator | }
2026-03-28 00:02:22.122131 | orchestrator |
2026-03-28 00:02:22.122141 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2026-03-28 00:02:22.122146 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-28 00:02:22.122149 | orchestrator | + device = (known after apply)
2026-03-28 00:02:22.122153 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.122157 | orchestrator | + instance_id = (known after apply)
2026-03-28 00:02:22.122161 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.122164 | orchestrator | + volume_id = (known after apply)
2026-03-28 00:02:22.122168 | orchestrator | }
2026-03-28 00:02:22.122202 | orchestrator |
2026-03-28 00:02:22.122212 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2026-03-28 00:02:22.122217 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-28 00:02:22.122220 | orchestrator | + device = (known after apply)
2026-03-28 00:02:22.122224 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.122228 | orchestrator | + instance_id = (known after apply)
2026-03-28 00:02:22.122240 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.122244 | orchestrator | + volume_id = (known after apply)
2026-03-28 00:02:22.122247 | orchestrator | }
2026-03-28 00:02:22.122289 | orchestrator |
2026-03-28 00:02:22.122300 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2026-03-28 00:02:22.122304 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-28 00:02:22.122308 | orchestrator | + device = (known after apply)
2026-03-28 00:02:22.122312 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.122316 | orchestrator | + instance_id = (known after apply)
2026-03-28 00:02:22.122320 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.122323 | orchestrator | + volume_id = (known after apply)
2026-03-28 00:02:22.122327 | orchestrator | }
2026-03-28 00:02:22.122363 | orchestrator |
2026-03-28 00:02:22.122406 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2026-03-28 00:02:22.122411 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-28 00:02:22.122415 | orchestrator | + device = (known after apply)
2026-03-28 00:02:22.122419 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.122423 | orchestrator | + instance_id = (known after apply)
2026-03-28 00:02:22.122427 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.122437 | orchestrator | + volume_id = (known after apply)
2026-03-28 00:02:22.122441 | orchestrator | }
2026-03-28 00:02:22.122484 | orchestrator |
2026-03-28 00:02:22.122495 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2026-03-28 00:02:22.122499 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-28 00:02:22.122503 | orchestrator | + device = (known after apply)
2026-03-28 00:02:22.122507 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.122511 | orchestrator | + instance_id = (known after apply)
2026-03-28 00:02:22.122515 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.122518 | orchestrator | + volume_id = (known after apply)
2026-03-28 00:02:22.122522 | orchestrator | }
2026-03-28 00:02:22.122559 | orchestrator |
2026-03-28 00:02:22.122570 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2026-03-28 00:02:22.122574 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-28 00:02:22.122578 | orchestrator | + device = (known after apply)
2026-03-28 00:02:22.122582 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.122585 | orchestrator | + instance_id = (known after apply)
2026-03-28 00:02:22.122589 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.122593 | orchestrator | + volume_id = (known after apply)
2026-03-28 00:02:22.122597 | orchestrator | }
2026-03-28 00:02:22.122635 | orchestrator |
2026-03-28 00:02:22.122645 | orchestrator | # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2026-03-28 00:02:22.122651 | orchestrator | + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2026-03-28 00:02:22.122654 | orchestrator | + fixed_ip = (known after apply)
2026-03-28 00:02:22.122658 | orchestrator | + floating_ip = (known after apply)
2026-03-28 00:02:22.122662 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.122666 | orchestrator | + port_id = (known after apply)
2026-03-28 00:02:22.122670 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.122677 | orchestrator | }
2026-03-28 00:02:22.122779 | orchestrator |
2026-03-28 00:02:22.122799 | orchestrator | # openstack_networking_floatingip_v2.manager_floating_ip will be created
2026-03-28 00:02:22.122807 | orchestrator | + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2026-03-28 00:02:22.122813 | orchestrator | + address = (known after apply)
2026-03-28 00:02:22.122819 | orchestrator | + all_tags = (known after apply)
2026-03-28 00:02:22.122825 | orchestrator | + dns_domain = (known after apply)
2026-03-28 00:02:22.122831 | orchestrator | + dns_name = (known after apply)
2026-03-28 00:02:22.122836 | orchestrator | + fixed_ip = (known after apply)
2026-03-28 00:02:22.122842 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.122849 | orchestrator | + pool = "public"
2026-03-28 00:02:22.122854 | orchestrator | + port_id = (known after apply)
2026-03-28 00:02:22.122860 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.122865 | orchestrator | + subnet_id = (known after apply)
2026-03-28 00:02:22.122871 | orchestrator | + tenant_id = (known after apply)
2026-03-28 00:02:22.122876 | orchestrator | }
2026-03-28 00:02:22.123015 | orchestrator |
2026-03-28 00:02:22.123035 | orchestrator | # openstack_networking_network_v2.net_management will be created
2026-03-28 00:02:22.123041 | orchestrator | + resource "openstack_networking_network_v2" "net_management" {
2026-03-28 00:02:22.123047 | orchestrator | + admin_state_up = (known after apply)
2026-03-28 00:02:22.123053 | orchestrator | + all_tags = (known after apply)
2026-03-28 00:02:22.123059 | orchestrator | + availability_zone_hints = [
2026-03-28 00:02:22.123065 | orchestrator | + "nova",
2026-03-28 00:02:22.123070 | orchestrator | ]
2026-03-28 00:02:22.123076 | orchestrator | + dns_domain = (known after apply)
2026-03-28 00:02:22.123082 | orchestrator | + external = (known after apply)
2026-03-28 00:02:22.123087 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.123093 | orchestrator | + mtu = (known after apply)
2026-03-28 00:02:22.123099 | orchestrator | + name = "net-testbed-management"
2026-03-28 00:02:22.123104 | orchestrator | + port_security_enabled = (known after apply)
2026-03-28 00:02:22.123119 | orchestrator | + qos_policy_id = (known after apply)
2026-03-28 00:02:22.123125 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.123130 | orchestrator | + shared = (known after apply)
2026-03-28 00:02:22.123136 | orchestrator | + tenant_id = (known after apply)
2026-03-28 00:02:22.123142 | orchestrator | + transparent_vlan = (known after apply)
2026-03-28 00:02:22.123148 | orchestrator |
2026-03-28 00:02:22.123154 | orchestrator | + segments (known after apply)
2026-03-28 00:02:22.123159 | orchestrator | }
2026-03-28 00:02:22.123362 | orchestrator |
2026-03-28 00:02:22.123402 | orchestrator | # openstack_networking_port_v2.manager_port_management will be created
2026-03-28 00:02:22.123409 | orchestrator | + resource "openstack_networking_port_v2" "manager_port_management" {
2026-03-28 00:02:22.123414 | orchestrator | + admin_state_up = (known after apply)
2026-03-28 00:02:22.123421 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-28 00:02:22.123427 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-28 00:02:22.123441 | orchestrator | + all_tags = (known after apply)
2026-03-28 00:02:22.123448 | orchestrator | + device_id = (known after apply)
2026-03-28 00:02:22.123453 | orchestrator | + device_owner = (known after apply)
2026-03-28 00:02:22.123459 | orchestrator | + dns_assignment = (known after apply)
2026-03-28 00:02:22.123466 | orchestrator | + dns_name = (known after apply)
2026-03-28 00:02:22.123472 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.123477 | orchestrator | + mac_address = (known after apply)
2026-03-28 00:02:22.123483 | orchestrator | + network_id = (known after apply)
2026-03-28 00:02:22.123489 | orchestrator | + port_security_enabled = (known after apply)
2026-03-28 00:02:22.123495 | orchestrator | + qos_policy_id = (known after apply)
2026-03-28 00:02:22.123501 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.123507 | orchestrator | + security_group_ids = (known after apply)
2026-03-28 00:02:22.123513 | orchestrator | + tenant_id = (known after apply)
2026-03-28 00:02:22.123520 | orchestrator |
2026-03-28 00:02:22.123526 | orchestrator | + allowed_address_pairs {
2026-03-28 00:02:22.123532 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-28 00:02:22.123538 | orchestrator | }
2026-03-28 00:02:22.123544 | orchestrator |
2026-03-28 00:02:22.123550 | orchestrator | + binding (known after apply)
2026-03-28 00:02:22.123556 | orchestrator |
2026-03-28 00:02:22.123562 | orchestrator | + fixed_ip {
2026-03-28 00:02:22.123568 | orchestrator | + ip_address = "192.168.16.5"
2026-03-28 00:02:22.123574 | orchestrator | + subnet_id = (known after apply)
2026-03-28 00:02:22.123580 | orchestrator | }
2026-03-28 00:02:22.123586 | orchestrator | }
2026-03-28 00:02:22.123775 | orchestrator |
2026-03-28 00:02:22.123791 | orchestrator | # openstack_networking_port_v2.node_port_management[0] will be created
2026-03-28 00:02:22.123795 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-28 00:02:22.123799 | orchestrator | + admin_state_up = (known after apply)
2026-03-28 00:02:22.123803 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-28 00:02:22.123807 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-28 00:02:22.123811 | orchestrator | + all_tags = (known after apply)
2026-03-28 00:02:22.123815 | orchestrator | + device_id = (known after apply)
2026-03-28 00:02:22.123818 | orchestrator | + device_owner = (known after apply)
2026-03-28 00:02:22.123822 | orchestrator | + dns_assignment = (known after apply)
2026-03-28 00:02:22.123826 | orchestrator | + dns_name = (known after apply)
2026-03-28 00:02:22.123829 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.123833 | orchestrator | + mac_address = (known after apply)
2026-03-28 00:02:22.123837 | orchestrator | + network_id = (known after apply)
2026-03-28 00:02:22.123841 | orchestrator | + port_security_enabled = (known after apply)
2026-03-28 00:02:22.123844 | orchestrator | + qos_policy_id = (known after apply)
2026-03-28 00:02:22.123848 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.123860 | orchestrator | + security_group_ids = (known after apply)
2026-03-28 00:02:22.123864 | orchestrator | + tenant_id = (known after apply)
2026-03-28 00:02:22.123867 | orchestrator |
2026-03-28 00:02:22.123871 | orchestrator | + allowed_address_pairs {
2026-03-28 00:02:22.123875 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-28 00:02:22.123879 | orchestrator | }
2026-03-28 00:02:22.123883 | orchestrator | + allowed_address_pairs {
2026-03-28 00:02:22.123887 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-28 00:02:22.123890 | orchestrator | }
2026-03-28 00:02:22.123894 | orchestrator | + allowed_address_pairs {
2026-03-28 00:02:22.123898 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-28 00:02:22.123902 | orchestrator | }
2026-03-28 00:02:22.123906 | orchestrator |
2026-03-28 00:02:22.123909 | orchestrator | + binding (known after apply)
2026-03-28 00:02:22.123913 | orchestrator |
2026-03-28 00:02:22.123917 | orchestrator | + fixed_ip {
2026-03-28 00:02:22.123921 | orchestrator | + ip_address = "192.168.16.10"
2026-03-28 00:02:22.123925 | orchestrator | + subnet_id = (known after apply)
2026-03-28 00:02:22.123929 | orchestrator | }
2026-03-28 00:02:22.123932 | orchestrator | }
2026-03-28 00:02:22.124072 | orchestrator |
2026-03-28 00:02:22.124083 | orchestrator | # openstack_networking_port_v2.node_port_management[1] will be created
2026-03-28 00:02:22.124087 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-28 00:02:22.124091 | orchestrator | + admin_state_up = (known after apply)
2026-03-28 00:02:22.124095 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-28 00:02:22.124099 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-28 00:02:22.124102 | orchestrator | + all_tags = (known after apply)
2026-03-28 00:02:22.124106 | orchestrator | + device_id = (known after apply)
2026-03-28 00:02:22.124110 | orchestrator | + device_owner = (known after apply)
2026-03-28 00:02:22.124114 | orchestrator | + dns_assignment = (known after apply)
2026-03-28 00:02:22.124117 | orchestrator | + dns_name = (known after apply)
2026-03-28 00:02:22.124121 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.124125 | orchestrator | + mac_address = (known after apply)
2026-03-28 00:02:22.124128 | orchestrator | + network_id = (known after apply)
2026-03-28 00:02:22.124132 | orchestrator | + port_security_enabled = (known after apply)
2026-03-28 00:02:22.124136 | orchestrator | + qos_policy_id = (known after apply)
2026-03-28 00:02:22.124140 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.124143 | orchestrator | + security_group_ids = (known after apply)
2026-03-28 00:02:22.124147 | orchestrator | + tenant_id = (known after apply)
2026-03-28 00:02:22.124151 | orchestrator |
2026-03-28 00:02:22.124154 | orchestrator | + allowed_address_pairs {
2026-03-28 00:02:22.124158 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-28 00:02:22.124162 | orchestrator | }
2026-03-28 00:02:22.124166 | orchestrator | + allowed_address_pairs {
2026-03-28 00:02:22.124170 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-28 00:02:22.124173 | orchestrator | }
2026-03-28 00:02:22.124177 | orchestrator | + allowed_address_pairs {
2026-03-28 00:02:22.124181 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-28 00:02:22.124185 | orchestrator | }
2026-03-28 00:02:22.124188 | orchestrator |
2026-03-28 00:02:22.124192 | orchestrator | + binding (known after apply)
2026-03-28 00:02:22.124196 | orchestrator |
2026-03-28 00:02:22.124200 | orchestrator | + fixed_ip {
2026-03-28 00:02:22.124203 | orchestrator | + ip_address = "192.168.16.11"
2026-03-28 00:02:22.124207 | orchestrator | + subnet_id = (known after apply)
2026-03-28 00:02:22.124211 | orchestrator | }
2026-03-28 00:02:22.124215 | orchestrator | }
2026-03-28 00:02:22.124347 | orchestrator |
2026-03-28 00:02:22.124358 | orchestrator | # openstack_networking_port_v2.node_port_management[2] will be created
2026-03-28 00:02:22.124362 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-28 00:02:22.124366 | orchestrator | + admin_state_up = (known after apply)
2026-03-28 00:02:22.124412 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-28 00:02:22.124418 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-28 00:02:22.124424 | orchestrator | + all_tags = (known after apply)
2026-03-28 00:02:22.124437 | orchestrator | + device_id = (known after apply)
2026-03-28 00:02:22.124443 | orchestrator | + device_owner = (known after apply)
2026-03-28 00:02:22.124449 | orchestrator | + dns_assignment = (known after apply)
2026-03-28 00:02:22.124455 | orchestrator | + dns_name = (known after apply)
2026-03-28 00:02:22.124468 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.124474 | orchestrator | + mac_address = (known after apply)
2026-03-28 00:02:22.124482 | orchestrator | + network_id = (known after apply)
2026-03-28 00:02:22.124486 | orchestrator | + port_security_enabled = (known after apply)
2026-03-28 00:02:22.124489 | orchestrator | + qos_policy_id = (known after apply)
2026-03-28 00:02:22.124493 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.124497 | orchestrator | + security_group_ids = (known after apply)
2026-03-28 00:02:22.124501 | orchestrator | + tenant_id = (known after apply)
2026-03-28 00:02:22.124504 | orchestrator |
2026-03-28 00:02:22.124508 | orchestrator | + allowed_address_pairs {
2026-03-28 00:02:22.124512 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-28 00:02:22.124516 | orchestrator | }
2026-03-28 00:02:22.124520 | orchestrator | + allowed_address_pairs {
2026-03-28 00:02:22.124523 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-28 00:02:22.124527 | orchestrator | }
2026-03-28 00:02:22.124531 | orchestrator | + allowed_address_pairs {
2026-03-28 00:02:22.124535 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-28 00:02:22.124538 | orchestrator | }
2026-03-28 00:02:22.124542 | orchestrator |
2026-03-28 00:02:22.124546 | orchestrator | + binding (known after apply)
2026-03-28 00:02:22.124550 | orchestrator |
2026-03-28 00:02:22.124554 | orchestrator | + fixed_ip {
2026-03-28 00:02:22.124557 | orchestrator | + ip_address = "192.168.16.12"
2026-03-28 00:02:22.124561 | orchestrator | + subnet_id = (known after apply)
2026-03-28 00:02:22.124565 | orchestrator | }
2026-03-28 00:02:22.124569 | orchestrator | }
2026-03-28 00:02:22.124712 | orchestrator |
2026-03-28 00:02:22.124724 | orchestrator | # openstack_networking_port_v2.node_port_management[3] will be created
2026-03-28 00:02:22.124729 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-28 00:02:22.124733 | orchestrator | + admin_state_up = (known after apply)
2026-03-28 00:02:22.124737 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-28 00:02:22.124740 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-28 00:02:22.124744 | orchestrator | + all_tags = (known after apply)
2026-03-28 00:02:22.124748 | orchestrator | + device_id = (known after apply)
2026-03-28 00:02:22.124752 | orchestrator | + device_owner = (known after apply)
2026-03-28 00:02:22.124755 | orchestrator | + dns_assignment = (known after apply)
2026-03-28 00:02:22.124759 | orchestrator | + dns_name = (known after apply)
2026-03-28 00:02:22.124763 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.124767 | orchestrator | + mac_address = (known after apply)
2026-03-28 00:02:22.124770 | orchestrator | + network_id = (known after apply)
2026-03-28 00:02:22.124774 | orchestrator | + port_security_enabled = (known after apply)
2026-03-28 00:02:22.124778 | orchestrator | + qos_policy_id = (known after apply)
2026-03-28 00:02:22.124781 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.124785 | orchestrator | + security_group_ids = (known after apply)
2026-03-28 00:02:22.124789 | orchestrator | + tenant_id = (known after apply)
2026-03-28 00:02:22.124793 | orchestrator |
2026-03-28 00:02:22.124797 | orchestrator | + allowed_address_pairs {
2026-03-28 00:02:22.124801 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-28 00:02:22.124804 | orchestrator | }
2026-03-28 00:02:22.124808 | orchestrator | + allowed_address_pairs {
2026-03-28 00:02:22.124812 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-28 00:02:22.124816 | orchestrator | }
2026-03-28 00:02:22.124819 | orchestrator | + allowed_address_pairs {
2026-03-28 00:02:22.124823 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-28 00:02:22.124827 | orchestrator | }
2026-03-28 00:02:22.124831 | orchestrator |
2026-03-28 00:02:22.124839 | orchestrator | + binding (known after apply)
2026-03-28 00:02:22.124843 | orchestrator |
2026-03-28 00:02:22.124846 | orchestrator | + fixed_ip {
2026-03-28 00:02:22.124850 | orchestrator | + ip_address = "192.168.16.13"
2026-03-28 00:02:22.124854 | orchestrator | + subnet_id = (known after apply)
2026-03-28 00:02:22.124858 | orchestrator | }
2026-03-28 00:02:22.124862 | orchestrator | }
2026-03-28 00:02:22.125000 | orchestrator |
2026-03-28 00:02:22.125011 | orchestrator | # openstack_networking_port_v2.node_port_management[4] will be created
2026-03-28 00:02:22.125015 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-28 00:02:22.125019 | orchestrator | + admin_state_up = (known after apply)
2026-03-28 00:02:22.125023 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-28 00:02:22.125026 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-28 00:02:22.125030 | orchestrator | + all_tags = (known after apply)
2026-03-28 00:02:22.125034 | orchestrator | + device_id = (known after apply)
2026-03-28 00:02:22.125038 | orchestrator | + device_owner = (known after apply)
2026-03-28 00:02:22.125041 | orchestrator | + dns_assignment = (known after apply)
2026-03-28 00:02:22.125045 | orchestrator | + dns_name = (known after apply)
2026-03-28 00:02:22.125049 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.125052 | orchestrator | + mac_address = (known after apply)
2026-03-28 00:02:22.125056 | orchestrator | + network_id = (known after apply)
2026-03-28 00:02:22.125060 | orchestrator | + port_security_enabled = (known after apply)
2026-03-28 00:02:22.125064 | orchestrator | + qos_policy_id = (known after apply)
2026-03-28 00:02:22.125068 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.125071 | orchestrator | + security_group_ids = (known after apply)
2026-03-28 00:02:22.125075 | orchestrator | + tenant_id = (known after apply)
2026-03-28 00:02:22.125081 | orchestrator |
2026-03-28 00:02:22.125085 | orchestrator | + allowed_address_pairs {
2026-03-28 00:02:22.125088 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-28 00:02:22.125092 | orchestrator | }
2026-03-28 00:02:22.125096 | orchestrator | + allowed_address_pairs {
2026-03-28 00:02:22.125100 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-28 00:02:22.125103 | orchestrator | }
2026-03-28 00:02:22.125107 | orchestrator | + allowed_address_pairs {
2026-03-28 00:02:22.125111 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-28 00:02:22.125115 | orchestrator | }
2026-03-28 00:02:22.125118 | orchestrator |
2026-03-28 00:02:22.125122 | orchestrator | + binding (known after apply)
2026-03-28 00:02:22.125126 | orchestrator |
2026-03-28 00:02:22.125130 | orchestrator | + fixed_ip {
2026-03-28 00:02:22.125135 | orchestrator | + ip_address = "192.168.16.14"
2026-03-28 00:02:22.125141 | orchestrator | + subnet_id = (known after apply)
2026-03-28 00:02:22.125147 | orchestrator | }
2026-03-28 00:02:22.125153 | orchestrator | }
2026-03-28 00:02:22.125309 | orchestrator |
2026-03-28 00:02:22.125323 | orchestrator | # openstack_networking_port_v2.node_port_management[5] will be created
2026-03-28 00:02:22.125327 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-28 00:02:22.125331 | orchestrator | + admin_state_up = (known after apply)
2026-03-28 00:02:22.125335 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-28 00:02:22.125339 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-28 00:02:22.125342 | orchestrator | + all_tags = (known after apply)
2026-03-28 00:02:22.125346 | orchestrator | + device_id = (known after apply)
2026-03-28 00:02:22.125350 | orchestrator | + device_owner = (known after apply)
2026-03-28 00:02:22.125354 | orchestrator | + dns_assignment = (known after apply)
2026-03-28 00:02:22.125357 | orchestrator | + dns_name = (known after apply)
2026-03-28 00:02:22.125361 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.125365 | orchestrator | + mac_address = (known after apply)
2026-03-28 00:02:22.125385 | orchestrator | + network_id = (known after apply)
2026-03-28 00:02:22.125389 | orchestrator | + port_security_enabled = (known after apply)
2026-03-28 00:02:22.125392 | orchestrator | + qos_policy_id = (known after apply)
2026-03-28 00:02:22.125401 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.125405 | orchestrator | + security_group_ids = (known after apply)
2026-03-28 00:02:22.125408 | orchestrator | + tenant_id = (known after apply)
2026-03-28 00:02:22.125412 | orchestrator |
2026-03-28 00:02:22.125416 | orchestrator | + allowed_address_pairs {
2026-03-28 00:02:22.125420 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-28 00:02:22.125423 | orchestrator | }
2026-03-28 00:02:22.125427 | orchestrator | + allowed_address_pairs {
2026-03-28 00:02:22.125431 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-28 00:02:22.125435 | orchestrator | }
2026-03-28 00:02:22.125438 | orchestrator | + allowed_address_pairs {
2026-03-28 00:02:22.125442 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-28 00:02:22.125446 | orchestrator | }
2026-03-28 00:02:22.125450 | orchestrator |
2026-03-28 00:02:22.125457 | orchestrator | + binding (known after apply)
2026-03-28 00:02:22.125461 | orchestrator |
2026-03-28 00:02:22.125465 | orchestrator | + fixed_ip {
2026-03-28 00:02:22.125469 | orchestrator | + ip_address = "192.168.16.15"
2026-03-28 00:02:22.125472 | orchestrator | + subnet_id = (known after apply)
2026-03-28 00:02:22.125476 | orchestrator | }
2026-03-28 00:02:22.125480 | orchestrator | }
2026-03-28 00:02:22.125526 | orchestrator |
2026-03-28 00:02:22.125537 | orchestrator | # openstack_networking_router_interface_v2.router_interface will be created
2026-03-28 00:02:22.125541 | orchestrator | + resource "openstack_networking_router_interface_v2" "router_interface" {
2026-03-28 00:02:22.125545 | orchestrator | + force_destroy = false
2026-03-28 00:02:22.125549 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.125553 | orchestrator | + port_id = (known after apply)
2026-03-28 00:02:22.125556 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.125560 | orchestrator | + router_id = (known after apply)
2026-03-28 00:02:22.125564 | orchestrator | + subnet_id = (known after apply)
2026-03-28 00:02:22.125568 | orchestrator | }
2026-03-28 00:02:22.125658 | orchestrator |
2026-03-28 00:02:22.125669 | orchestrator | # openstack_networking_router_v2.router will be created
2026-03-28 00:02:22.125674 | orchestrator | + resource "openstack_networking_router_v2" "router" {
2026-03-28 00:02:22.125679 | orchestrator | + admin_state_up = (known after apply)
2026-03-28 00:02:22.125687 | orchestrator | + all_tags = (known after apply)
2026-03-28 00:02:22.125691 | orchestrator | + availability_zone_hints = [
2026-03-28 00:02:22.125694 | orchestrator | + "nova",
2026-03-28 00:02:22.125698 | orchestrator | ]
2026-03-28 00:02:22.125702 | orchestrator | + distributed = (known after apply)
2026-03-28 00:02:22.125706 | orchestrator | + enable_snat = (known after apply)
2026-03-28 00:02:22.125710 | orchestrator | + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
2026-03-28 00:02:22.125713 | orchestrator | + external_qos_policy_id = (known after apply)
2026-03-28 00:02:22.125717 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.125723 | orchestrator | + name = "testbed"
2026-03-28 00:02:22.125730 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.125734 | orchestrator | + tenant_id = (known after apply)
2026-03-28 00:02:22.125739 | orchestrator |
2026-03-28 00:02:22.125745 | orchestrator | + external_fixed_ip (known after apply)
2026-03-28 00:02:22.125751 | orchestrator | }
2026-03-28 00:02:22.125829 | orchestrator |
2026-03-28 00:02:22.125840 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
2026-03-28 00:02:22.125845 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
2026-03-28 00:02:22.125849 | orchestrator | + description = "ssh"
2026-03-28 00:02:22.125853 | orchestrator | + direction = "ingress"
2026-03-28 00:02:22.125857 | orchestrator | + ethertype = "IPv4"
2026-03-28 00:02:22.125860 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.125864 | orchestrator | + port_range_max = 22
2026-03-28 00:02:22.125868 | orchestrator | + port_range_min = 22
2026-03-28 00:02:22.125872 | orchestrator | + protocol = "tcp"
2026-03-28 00:02:22.125876 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.125887 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-28 00:02:22.125891 | orchestrator | + remote_group_id = (known after apply)
2026-03-28 00:02:22.125895 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-28 00:02:22.125899 | orchestrator | + security_group_id = (known after apply)
2026-03-28 00:02:22.125903 | orchestrator | + tenant_id = (known after apply)
2026-03-28 00:02:22.125907 | orchestrator | }
2026-03-28 00:02:22.125984 | orchestrator |
2026-03-28 00:02:22.125995 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
2026-03-28 00:02:22.125999 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
2026-03-28 00:02:22.126003 | orchestrator | + description = "wireguard"
2026-03-28 00:02:22.126007 | orchestrator | + direction = "ingress"
2026-03-28 00:02:22.126011 | orchestrator | + ethertype = "IPv4"
2026-03-28 00:02:22.126041 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.126045 | orchestrator | + port_range_max = 51820
2026-03-28 00:02:22.126048 | orchestrator | + port_range_min = 51820
2026-03-28 00:02:22.126052 | orchestrator | + protocol = "udp"
2026-03-28 00:02:22.126056 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.126060 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-28 00:02:22.126064 | orchestrator | + remote_group_id = (known after apply)
2026-03-28 00:02:22.126067 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-28 00:02:22.126072 | orchestrator | + security_group_id = (known after apply)
2026-03-28 00:02:22.126076 | orchestrator | + tenant_id = (known after apply)
2026-03-28 00:02:22.126080 | orchestrator | }
2026-03-28 00:02:22.126147 | orchestrator |
2026-03-28 00:02:22.126158 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
2026-03-28 00:02:22.126162 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
2026-03-28 00:02:22.126166 | orchestrator | + direction = "ingress"
2026-03-28 00:02:22.126170 | orchestrator | + ethertype = "IPv4"
2026-03-28 00:02:22.126174 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.126177 | orchestrator | + protocol = "tcp"
2026-03-28 00:02:22.126181 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.126185 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-28 00:02:22.126189 | orchestrator | + remote_group_id = (known after apply)
2026-03-28 00:02:22.126193 | orchestrator | + remote_ip_prefix = "192.168.16.0/20"
2026-03-28 00:02:22.126196 | orchestrator | + security_group_id = (known after apply)
2026-03-28 00:02:22.126200 | orchestrator | + tenant_id = (known after apply)
2026-03-28 00:02:22.126204 | orchestrator | }
2026-03-28 00:02:22.126264 | orchestrator |
2026-03-28 00:02:22.126274 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
2026-03-28 00:02:22.126279 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
2026-03-28 00:02:22.126283 | orchestrator | + direction = "ingress"
2026-03-28 00:02:22.126286 | orchestrator | + ethertype = "IPv4"
2026-03-28 00:02:22.126290 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.126294 | orchestrator | + protocol = "udp"
2026-03-28 00:02:22.126298 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.126301 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-28 00:02:22.126305 | orchestrator | + remote_group_id = (known after apply)
2026-03-28 00:02:22.126309 | orchestrator | + remote_ip_prefix = "192.168.16.0/20"
2026-03-28 00:02:22.126313 | orchestrator | + security_group_id = (known after apply)
2026-03-28 00:02:22.126317 | orchestrator | + tenant_id = (known after apply)
2026-03-28 00:02:22.126320 | orchestrator | }
2026-03-28 00:02:22.126392 | orchestrator |
2026-03-28 00:02:22.126403 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
2026-03-28 00:02:22.126412 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2026-03-28 00:02:22.126416 | orchestrator | + direction = "ingress"
2026-03-28 00:02:22.126419 | orchestrator | + ethertype = "IPv4"
2026-03-28 00:02:22.126423 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.126427 | orchestrator | + protocol = "icmp"
2026-03-28 00:02:22.126431 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.126435 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-28 00:02:22.126438 | orchestrator | + remote_group_id = (known after apply)
2026-03-28 00:02:22.126442 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-28 00:02:22.126446 | orchestrator | + security_group_id = (known after apply)
2026-03-28 00:02:22.126450 | orchestrator | + tenant_id = (known after apply)
2026-03-28 00:02:22.126454 | orchestrator | }
2026-03-28 00:02:22.126515 | orchestrator |
2026-03-28 00:02:22.126526 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-03-28 00:02:22.126530 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-03-28 00:02:22.126534 | orchestrator | + direction = "ingress"
2026-03-28 00:02:22.126538 | orchestrator | + ethertype = "IPv4"
2026-03-28 00:02:22.126541 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.126545 | orchestrator | + protocol = "tcp"
2026-03-28 00:02:22.126549 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.126553 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-28 00:02:22.126560 | orchestrator | + remote_group_id = (known after apply)
2026-03-28 00:02:22.126564 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-28 00:02:22.126568 | orchestrator | + security_group_id = (known after apply)
2026-03-28 00:02:22.126572 | orchestrator | + tenant_id = (known after apply)
2026-03-28 00:02:22.126576 | orchestrator | }
2026-03-28 00:02:22.126638 | orchestrator |
2026-03-28 00:02:22.126648 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-03-28 00:02:22.126653 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-03-28 00:02:22.126657 | orchestrator | + direction = "ingress"
2026-03-28 00:02:22.126660 | orchestrator | + ethertype = "IPv4"
2026-03-28 00:02:22.126664 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.126668 | orchestrator | + protocol = "udp"
2026-03-28 00:02:22.126672 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.126676 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-28 00:02:22.126679 | orchestrator | + remote_group_id = (known after apply)
2026-03-28 00:02:22.126683 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-28 00:02:22.126687 | orchestrator | + security_group_id = (known after apply)
2026-03-28 00:02:22.126691 | orchestrator | + tenant_id = (known after apply)
2026-03-28 00:02:22.126695 | orchestrator | }
2026-03-28 00:02:22.126756 | orchestrator |
2026-03-28 00:02:22.126767 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-03-28 00:02:22.126771 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-03-28 00:02:22.126775 | orchestrator | + direction = "ingress"
2026-03-28 00:02:22.126784 | orchestrator | + ethertype = "IPv4"
2026-03-28 00:02:22.126788 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.126792 | orchestrator | + protocol = "icmp"
2026-03-28 00:02:22.126795 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.126799 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-28 00:02:22.126803 | orchestrator | + remote_group_id = (known after apply)
2026-03-28 00:02:22.126807 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-28 00:02:22.126810 | orchestrator | + security_group_id = (known after apply)
2026-03-28 00:02:22.126814 | orchestrator | + tenant_id = (known after apply)
2026-03-28 00:02:22.126822 | orchestrator | }
2026-03-28 00:02:22.126890 | orchestrator |
2026-03-28 00:02:22.126901 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-03-28 00:02:22.126905 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-03-28 00:02:22.126909 | orchestrator | + description = "vrrp"
2026-03-28 00:02:22.126912 | orchestrator | + direction = "ingress"
2026-03-28 00:02:22.126916 | orchestrator | + ethertype = "IPv4"
2026-03-28 00:02:22.126920 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.126924 | orchestrator | + protocol = "112"
2026-03-28 00:02:22.126927 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.126931 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-28 00:02:22.126935 | orchestrator | + remote_group_id = (known after apply)
2026-03-28 00:02:22.126939 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-28 00:02:22.126943 | orchestrator | + security_group_id = (known after apply)
2026-03-28 00:02:22.126946 | orchestrator | + tenant_id = (known after apply)
2026-03-28 00:02:22.126950 | orchestrator | }
2026-03-28 00:02:22.126994 | orchestrator |
2026-03-28 00:02:22.127005 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created
2026-03-28 00:02:22.127009 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-03-28 00:02:22.127013 | orchestrator | + all_tags = (known after apply)
2026-03-28 00:02:22.127017 | orchestrator | + description = "management security group"
2026-03-28 00:02:22.127020 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.127024 | orchestrator | + name = "testbed-management"
2026-03-28 00:02:22.127028 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.127032 | orchestrator | + stateful = (known after apply)
2026-03-28 00:02:22.127036 | orchestrator | + tenant_id = (known after apply)
2026-03-28 00:02:22.127039 | orchestrator | }
2026-03-28 00:02:22.127084 | orchestrator |
2026-03-28 00:02:22.127094 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created
2026-03-28 00:02:22.127098 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-03-28 00:02:22.127102 | orchestrator | + all_tags = (known after apply)
2026-03-28 00:02:22.127106 | orchestrator | + description = "node security group"
2026-03-28 00:02:22.127110 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.127114 | orchestrator | + name = "testbed-node"
2026-03-28 00:02:22.127117 | orchestrator | + region = (known after apply)
2026-03-28 00:02:22.127121 | orchestrator | + stateful = (known after apply)
2026-03-28 00:02:22.127125 | orchestrator | + tenant_id = (known after apply)
2026-03-28 00:02:22.127129 | orchestrator | }
2026-03-28 00:02:22.127229 | orchestrator |
2026-03-28 00:02:22.127240 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created
2026-03-28 00:02:22.127244 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-03-28 00:02:22.127248 | orchestrator | + all_tags = (known after apply)
2026-03-28 00:02:22.127252 | orchestrator | + cidr = "192.168.16.0/20"
2026-03-28 00:02:22.127256 | orchestrator | + dns_nameservers = [
2026-03-28 00:02:22.127260 | orchestrator | + "8.8.8.8",
2026-03-28 00:02:22.127263 | orchestrator | + "9.9.9.9",
2026-03-28 00:02:22.127267 | orchestrator | ]
2026-03-28 00:02:22.127271 | orchestrator | + enable_dhcp = true
2026-03-28 00:02:22.127275 | orchestrator | + gateway_ip = (known after apply)
2026-03-28 00:02:22.127279 | orchestrator | + id = (known after apply)
2026-03-28 00:02:22.127282 | orchestrator | + ip_version = 4
2026-03-28 00:02:22.127286 | orchestrator | + ipv6_address_mode = (known after apply)
2026-03-28 00:02:22.127290 | orchestrator | + ipv6_ra_mode = (known after apply)
2026-03-28 00:02:22.127294 | orchestrator | + name = "subnet-testbed-management"
2026-03-28 00:02:22.127298 | orchestrator | + network_id = (known after apply) 2026-03-28 00:02:22.127301 | orchestrator | + no_gateway = false 2026-03-28 00:02:22.127305 | orchestrator | + region = (known after apply) 2026-03-28 00:02:22.127309 | orchestrator | + service_types = (known after apply) 2026-03-28 00:02:22.127316 | orchestrator | + tenant_id = (known after apply) 2026-03-28 00:02:22.127320 | orchestrator | 2026-03-28 00:02:22.127324 | orchestrator | + allocation_pool { 2026-03-28 00:02:22.127327 | orchestrator | + end = "192.168.31.250" 2026-03-28 00:02:22.127331 | orchestrator | + start = "192.168.31.200" 2026-03-28 00:02:22.127335 | orchestrator | } 2026-03-28 00:02:22.127339 | orchestrator | } 2026-03-28 00:02:22.127397 | orchestrator | 2026-03-28 00:02:22.127409 | orchestrator | # terraform_data.image will be created 2026-03-28 00:02:22.127414 | orchestrator | + resource "terraform_data" "image" { 2026-03-28 00:02:22.127418 | orchestrator | + id = (known after apply) 2026-03-28 00:02:22.127421 | orchestrator | + input = "Ubuntu 24.04" 2026-03-28 00:02:22.127425 | orchestrator | + output = (known after apply) 2026-03-28 00:02:22.127429 | orchestrator | } 2026-03-28 00:02:22.127459 | orchestrator | 2026-03-28 00:02:22.127470 | orchestrator | # terraform_data.image_node will be created 2026-03-28 00:02:22.127474 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-28 00:02:22.127478 | orchestrator | + id = (known after apply) 2026-03-28 00:02:22.127482 | orchestrator | + input = "Ubuntu 24.04" 2026-03-28 00:02:22.127486 | orchestrator | + output = (known after apply) 2026-03-28 00:02:22.127490 | orchestrator | } 2026-03-28 00:02:22.127504 | orchestrator | 2026-03-28 00:02:22.127508 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-03-28 00:02:22.127519 | orchestrator | 2026-03-28 00:02:22.127523 | orchestrator | Changes to Outputs: 2026-03-28 00:02:22.127533 | orchestrator | + manager_address = (sensitive value) 2026-03-28 00:02:22.127537 | orchestrator | + private_key = (sensitive value) 2026-03-28 00:02:22.393620 | orchestrator | terraform_data.image: Creating... 2026-03-28 00:02:22.393719 | orchestrator | terraform_data.image: Creation complete after 0s [id=6b08ad87-3c0e-ae9e-7c6c-6b835189f45d] 2026-03-28 00:02:22.396830 | orchestrator | terraform_data.image_node: Creating... 2026-03-28 00:02:22.397830 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=95804f83-ea6c-4daa-8780-55c4675dea40] 2026-03-28 00:02:22.403628 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-03-28 00:02:22.403852 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-03-28 00:02:22.411537 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-03-28 00:02:22.412609 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-03-28 00:02:22.412636 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-03-28 00:02:22.418067 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-03-28 00:02:22.418131 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-03-28 00:02:22.419043 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-03-28 00:02:22.421730 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-03-28 00:02:22.423071 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-03-28 00:02:22.944180 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-28 00:02:22.950305 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 
2026-03-28 00:02:22.972493 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-28 00:02:22.978311 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-03-28 00:02:23.121684 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2026-03-28 00:02:23.128183 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-03-28 00:02:23.691874 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 2s [id=1fddbaf2-3ddf-4055-b360-eb2982a2e4c5] 2026-03-28 00:02:23.703289 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-03-28 00:02:26.220401 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=adec6741-41cb-49e2-9389-e6d1302151a0] 2026-03-28 00:02:26.226629 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-03-28 00:02:26.229401 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=b9aebbdd-9418-41ff-9099-90b7dcb703f9] 2026-03-28 00:02:26.240436 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-03-28 00:02:26.258603 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=64213c7d-5962-413c-aa45-2f60eed78f32] 2026-03-28 00:02:26.265288 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-03-28 00:02:26.297841 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=f8ddcfbb-f935-4942-af25-8ac280f1cc67] 2026-03-28 00:02:26.310082 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 
2026-03-28 00:02:26.310233 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=4cb6368c-0066-4efd-8388-81f1557a02ca] 2026-03-28 00:02:26.323801 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-03-28 00:02:26.338201 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=86e8f6ba-fcdd-41b8-9839-c0061159d97d] 2026-03-28 00:02:26.347450 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-03-28 00:02:26.370974 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=94eace61-73f7-4993-ae2a-02303df71bb3] 2026-03-28 00:02:26.372367 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=9560503a-139c-4329-8ffd-1ea1e0c721e5] 2026-03-28 00:02:26.386774 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-03-28 00:02:26.390263 | orchestrator | local_file.id_rsa_pub: Creating... 2026-03-28 00:02:26.393696 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=6cc390fb3f2d6e6526a4d4c7a887c4ea7e4feb3b] 2026-03-28 00:02:26.395545 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=2d161257ee2713e8cb81320681c32e348d51a42d] 2026-03-28 00:02:26.401688 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 
2026-03-28 00:02:26.415912 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=d59a946d-61ee-4c80-a151-abde4d1a3094] 2026-03-28 00:02:27.083710 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01] 2026-03-28 00:02:27.415674 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=f3dce194-3346-43c7-85fd-af4af1ecfc35] 2026-03-28 00:02:27.423341 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-03-28 00:02:29.713249 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=163cf866-001f-4e5b-a61a-02887cb0e3f0] 2026-03-28 00:02:29.765414 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=634fcc3a-1043-40bd-adf5-6b5290b4e5e3] 2026-03-28 00:02:29.804313 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=a4f66dec-4fd4-432f-b746-29e54df03c22] 2026-03-28 00:02:29.821216 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=7b91ecb4-57d6-4807-af9e-4fff691df09c] 2026-03-28 00:02:29.877412 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=501feac0-2064-4a35-a9ff-661eec37e0e7] 2026-03-28 00:02:29.880020 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=90f04035-d6f7-4dc7-b7f8-aa7e66258802] 2026-03-28 00:02:30.106662 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=4ec90bf0-5b48-4d29-8b93-033585b59c37] 2026-03-28 00:02:30.112929 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-03-28 00:02:30.114894 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 
2026-03-28 00:02:30.115781 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-03-28 00:02:30.334427 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=41dd35a6-2998-4ee4-a260-02e406e60837] 2026-03-28 00:02:30.347243 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-03-28 00:02:30.347340 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-03-28 00:02:30.349572 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-03-28 00:02:30.350305 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-03-28 00:02:30.351149 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-03-28 00:02:30.352261 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-03-28 00:02:30.373122 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=99fd2b05-31ee-412d-a3c0-93ee89a06a5e] 2026-03-28 00:02:30.378959 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-03-28 00:02:30.384371 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-03-28 00:02:30.386627 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-03-28 00:02:30.528865 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=ca1425b9-0f6b-46c0-81e2-94fafdd7a5d8] 2026-03-28 00:02:30.539665 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 
2026-03-28 00:02:30.579554 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=4ec6eb25-95de-4f76-8b1f-1dda7f73cd00] 2026-03-28 00:02:30.581266 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-03-28 00:02:30.868829 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=21feec3d-f4cf-4e9d-adb9-a00b28cc0751] 2026-03-28 00:02:30.884346 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-03-28 00:02:30.949678 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=86bc3adb-1ff6-4a78-a911-4e0321289697] 2026-03-28 00:02:30.958732 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-03-28 00:02:31.384676 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=cf2f0bf1-072c-4e50-991f-dd07da424770] 2026-03-28 00:02:31.395159 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-03-28 00:02:31.419871 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=6b43cc6c-68f6-492e-9147-a75447a41c07] 2026-03-28 00:02:31.427337 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-03-28 00:02:31.482951 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=95a2ea38-57cc-4c8e-8ad2-060d76162ec1] 2026-03-28 00:02:31.490528 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 
2026-03-28 00:02:31.582989 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=84455afc-957d-40d2-bf5d-07f5369f5eab] 2026-03-28 00:02:31.817419 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=57f6e5d7-4737-40b8-9d79-78184b233e09] 2026-03-28 00:02:31.846942 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 2s [id=1f7735a5-c3be-4d90-8392-29c01ff6bd30] 2026-03-28 00:02:32.390683 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 2s [id=22c4df6a-6be6-4100-bab0-28b4438a25c7] 2026-03-28 00:02:32.442400 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=83e07588-365d-4f94-ac70-94406327f6be] 2026-03-28 00:02:32.566047 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=55ceaee4-f33a-4c40-aa4c-8d37c3dc24c9] 2026-03-28 00:02:32.896480 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 3s [id=10f020e7-5500-4698-a77f-ec7795138297] 2026-03-28 00:02:33.045835 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 2s [id=fab49135-ddc1-4cb4-917f-1de0a8699a49] 2026-03-28 00:02:33.297657 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=e1cb5892-15f0-40b4-9dca-a52d528fdffb] 2026-03-28 00:02:34.417169 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=7a997f69-be57-472b-bec7-0f9ff60c85a6] 2026-03-28 00:02:34.442532 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-03-28 00:02:34.453586 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 
2026-03-28 00:02:34.455360 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-03-28 00:02:34.456709 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-03-28 00:02:34.458329 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-03-28 00:02:34.463937 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-03-28 00:02:34.476597 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-03-28 00:02:37.322513 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=9a5069a5-8614-4070-b8a7-86814c171391] 2026-03-28 00:02:37.330350 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-03-28 00:02:37.337543 | orchestrator | local_file.inventory: Creating... 2026-03-28 00:02:37.340376 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-03-28 00:02:37.346736 | orchestrator | local_file.inventory: Creation complete after 0s [id=fb3ea57945022aec918aecfb4a83f58e14578845] 2026-03-28 00:02:37.348368 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=73eec0f6ebdafc3cb860a2ef021696a5bf506209] 2026-03-28 00:02:38.941659 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=9a5069a5-8614-4070-b8a7-86814c171391] 2026-03-28 00:02:44.455047 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-03-28 00:02:44.457528 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-03-28 00:02:44.457661 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-03-28 00:02:44.469893 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... 
[10s elapsed] 2026-03-28 00:02:44.469991 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-03-28 00:02:44.478482 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-03-28 00:02:54.464470 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-03-28 00:02:54.464600 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-03-28 00:02:54.464627 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-03-28 00:02:54.470938 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-03-28 00:02:54.471018 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-03-28 00:02:54.479453 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-03-28 00:03:04.473520 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-03-28 00:03:04.473641 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-03-28 00:03:04.473658 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-03-28 00:03:04.473670 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2026-03-28 00:03:04.473695 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-03-28 00:03:04.480185 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... 
[30s elapsed] 2026-03-28 00:03:05.734450 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 32s [id=f7b1887e-9976-4307-88d9-27ad75a58d45] 2026-03-28 00:03:05.800073 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 32s [id=8da0cb02-2671-4c3a-b271-5f4d79d6570f] 2026-03-28 00:03:05.822131 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 32s [id=20bf967b-1474-4f9e-875b-e209234c4ea3] 2026-03-28 00:03:05.941920 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 32s [id=bad22ba4-9852-4ae5-9085-9c5a7670fb5a] 2026-03-28 00:03:14.481618 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2026-03-28 00:03:14.481790 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2026-03-28 00:03:15.665170 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 42s [id=faf9165a-8529-4bb7-8930-0da8ff41adad] 2026-03-28 00:03:15.891574 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 42s [id=a1820ce0-0fa5-479d-9b25-cc75ace8455e] 2026-03-28 00:03:15.921039 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-03-28 00:03:15.921679 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-03-28 00:03:15.924933 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-03-28 00:03:15.924982 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-03-28 00:03:15.933827 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-03-28 00:03:15.940360 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 
2026-03-28 00:03:15.941088 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-03-28 00:03:15.944374 | orchestrator | null_resource.node_semaphore: Creating... 2026-03-28 00:03:15.946556 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=6159964272449855747] 2026-03-28 00:03:15.951439 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-03-28 00:03:15.958253 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-03-28 00:03:15.969646 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 2026-03-28 00:03:19.328297 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=20bf967b-1474-4f9e-875b-e209234c4ea3/f8ddcfbb-f935-4942-af25-8ac280f1cc67] 2026-03-28 00:03:19.357065 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=a1820ce0-0fa5-479d-9b25-cc75ace8455e/94eace61-73f7-4993-ae2a-02303df71bb3] 2026-03-28 00:03:19.375880 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=bad22ba4-9852-4ae5-9085-9c5a7670fb5a/86e8f6ba-fcdd-41b8-9839-c0061159d97d] 2026-03-28 00:03:19.384261 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=20bf967b-1474-4f9e-875b-e209234c4ea3/b9aebbdd-9418-41ff-9099-90b7dcb703f9] 2026-03-28 00:03:19.404108 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=bad22ba4-9852-4ae5-9085-9c5a7670fb5a/adec6741-41cb-49e2-9389-e6d1302151a0] 2026-03-28 00:03:19.407455 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=a1820ce0-0fa5-479d-9b25-cc75ace8455e/64213c7d-5962-413c-aa45-2f60eed78f32] 2026-03-28 00:03:25.500756 | orchestrator | 
openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 9s [id=20bf967b-1474-4f9e-875b-e209234c4ea3/4cb6368c-0066-4efd-8388-81f1557a02ca] 2026-03-28 00:03:25.506662 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=a1820ce0-0fa5-479d-9b25-cc75ace8455e/9560503a-139c-4329-8ffd-1ea1e0c721e5] 2026-03-28 00:03:25.519814 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=bad22ba4-9852-4ae5-9085-9c5a7670fb5a/d59a946d-61ee-4c80-a151-abde4d1a3094] 2026-03-28 00:03:25.970629 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-03-28 00:03:35.971510 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-03-28 00:03:36.623560 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=29361091-1a3f-40bf-8652-b14f9d8c7f36] 2026-03-28 00:03:36.640103 | orchestrator | 2026-03-28 00:03:36.640188 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
2026-03-28 00:03:36.640249 | orchestrator | 2026-03-28 00:03:36.640265 | orchestrator | Outputs: 2026-03-28 00:03:36.640277 | orchestrator | 2026-03-28 00:03:36.640318 | orchestrator | manager_address = 2026-03-28 00:03:36.640351 | orchestrator | private_key = 2026-03-28 00:03:36.834141 | orchestrator | ok: Runtime: 0:01:25.515818 2026-03-28 00:03:36.881311 | 2026-03-28 00:03:36.881524 | TASK [Fetch manager address] 2026-03-28 00:03:37.355199 | orchestrator | ok 2026-03-28 00:03:37.366963 | 2026-03-28 00:03:37.367107 | TASK [Set manager_host address] 2026-03-28 00:03:37.440737 | orchestrator | ok 2026-03-28 00:03:37.447687 | 2026-03-28 00:03:37.447814 | LOOP [Update ansible collections] 2026-03-28 00:03:38.389480 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-28 00:03:38.389750 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-28 00:03:38.389786 | orchestrator | Starting galaxy collection install process 2026-03-28 00:03:38.389810 | orchestrator | Process install dependency map 2026-03-28 00:03:38.389833 | orchestrator | Starting collection install process 2026-03-28 00:03:38.389865 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons' 2026-03-28 00:03:38.389889 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons 2026-03-28 00:03:38.389921 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-03-28 00:03:38.389975 | orchestrator | ok: Item: commons Runtime: 0:00:00.611693 2026-03-28 00:03:39.393108 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-28 00:03:39.393244 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-28 00:03:39.393283 | orchestrator | Starting galaxy collection 
install process 2026-03-28 00:03:39.393312 | orchestrator | Process install dependency map 2026-03-28 00:03:39.393340 | orchestrator | Starting collection install process 2026-03-28 00:03:39.393366 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services' 2026-03-28 00:03:39.393392 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services 2026-03-28 00:03:39.393435 | orchestrator | osism.services:999.0.0 was installed successfully 2026-03-28 00:03:39.393476 | orchestrator | ok: Item: services Runtime: 0:00:00.718917 2026-03-28 00:03:39.412124 | 2026-03-28 00:03:39.412262 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-28 00:03:49.989664 | orchestrator | ok 2026-03-28 00:03:49.999160 | 2026-03-28 00:03:49.999267 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-28 00:04:50.046764 | orchestrator | ok 2026-03-28 00:04:50.057895 | 2026-03-28 00:04:50.058051 | TASK [Fetch manager ssh hostkey] 2026-03-28 00:04:51.645922 | orchestrator | Output suppressed because no_log was given 2026-03-28 00:04:51.663739 | 2026-03-28 00:04:51.663921 | TASK [Get ssh keypair from terraform environment] 2026-03-28 00:04:52.203992 | orchestrator | ok: Runtime: 0:00:00.009278 2026-03-28 00:04:52.220714 | 2026-03-28 00:04:52.220884 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-28 00:04:52.268054 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-03-28 00:04:52.277720 | 2026-03-28 00:04:52.277848 | TASK [Run manager part 0] 2026-03-28 00:04:53.180102 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-28 00:04:53.225280 | orchestrator | 2026-03-28 00:04:53.225333 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-03-28 00:04:53.225340 | orchestrator | 2026-03-28 00:04:53.225353 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-03-28 00:04:55.779105 | orchestrator | ok: [testbed-manager] 2026-03-28 00:04:55.779165 | orchestrator | 2026-03-28 00:04:55.779192 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-28 00:04:55.779203 | orchestrator | 2026-03-28 00:04:55.779213 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:04:57.626464 | orchestrator | ok: [testbed-manager] 2026-03-28 00:04:57.636294 | orchestrator | 2026-03-28 00:04:57.636336 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-28 00:04:58.323013 | orchestrator | ok: [testbed-manager] 2026-03-28 00:04:58.323116 | orchestrator | 2026-03-28 00:04:58.323143 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-28 00:04:58.378557 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:04:58.378648 | orchestrator | 2026-03-28 00:04:58.378669 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-03-28 00:04:58.417619 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:04:58.417677 | orchestrator | 2026-03-28 00:04:58.417686 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-03-28 00:04:58.454839 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:04:58.454920 | 
orchestrator | 2026-03-28 00:04:58.454936 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-03-28 00:04:59.164920 | orchestrator | changed: [testbed-manager] 2026-03-28 00:04:59.165007 | orchestrator | 2026-03-28 00:04:59.165021 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-03-28 00:08:07.484311 | orchestrator | changed: [testbed-manager] 2026-03-28 00:08:07.484401 | orchestrator | 2026-03-28 00:08:07.484417 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-28 00:09:17.238965 | orchestrator | changed: [testbed-manager] 2026-03-28 00:09:17.239071 | orchestrator | 2026-03-28 00:09:17.239093 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-28 00:09:40.945663 | orchestrator | changed: [testbed-manager] 2026-03-28 00:09:40.945761 | orchestrator | 2026-03-28 00:09:40.945780 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-28 00:09:50.196293 | orchestrator | changed: [testbed-manager] 2026-03-28 00:09:50.196333 | orchestrator | 2026-03-28 00:09:50.196340 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-28 00:09:50.249583 | orchestrator | ok: [testbed-manager] 2026-03-28 00:09:50.249707 | orchestrator | 2026-03-28 00:09:50.249726 | orchestrator | TASK [Get current user] ******************************************************** 2026-03-28 00:09:51.042425 | orchestrator | ok: [testbed-manager] 2026-03-28 00:09:51.043030 | orchestrator | 2026-03-28 00:09:51.043053 | orchestrator | TASK [Create venv directory] *************************************************** 2026-03-28 00:09:51.884791 | orchestrator | changed: [testbed-manager] 2026-03-28 00:09:51.884849 | orchestrator | 2026-03-28 00:09:51.884861 | orchestrator | TASK [Install netaddr in venv] 
************************************************* 2026-03-28 00:09:58.380215 | orchestrator | changed: [testbed-manager] 2026-03-28 00:09:58.380292 | orchestrator | 2026-03-28 00:09:58.380316 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-03-28 00:10:04.473326 | orchestrator | changed: [testbed-manager] 2026-03-28 00:10:04.473422 | orchestrator | 2026-03-28 00:10:04.473438 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-28 00:10:07.283599 | orchestrator | changed: [testbed-manager] 2026-03-28 00:10:07.283667 | orchestrator | 2026-03-28 00:10:07.283675 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-28 00:10:09.157290 | orchestrator | changed: [testbed-manager] 2026-03-28 00:10:09.157358 | orchestrator | 2026-03-28 00:10:09.157369 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-28 00:10:10.300837 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-28 00:10:10.300947 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-28 00:10:10.300963 | orchestrator | 2026-03-28 00:10:10.300978 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-28 00:10:10.346283 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-28 00:10:10.346366 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-28 00:10:10.346380 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-28 00:10:10.346394 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-28 00:10:13.826308 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-28 00:10:13.826397 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-28 00:10:13.826410 | orchestrator | 2026-03-28 00:10:13.826421 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-28 00:10:14.408273 | orchestrator | changed: [testbed-manager] 2026-03-28 00:10:14.408311 | orchestrator | 2026-03-28 00:10:14.408318 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-28 00:13:36.468913 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-28 00:13:36.468962 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-28 00:13:36.468971 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-28 00:13:36.468978 | orchestrator | 2026-03-28 00:13:36.468985 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-28 00:13:38.751845 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-28 00:13:38.751927 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-28 00:13:38.751941 | orchestrator | 2026-03-28 00:13:38.751955 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-28 00:13:38.751967 | orchestrator | 2026-03-28 00:13:38.751979 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:13:40.228543 | orchestrator | ok: [testbed-manager] 2026-03-28 00:13:40.228577 | orchestrator | 2026-03-28 00:13:40.228582 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-28 00:13:40.280707 | orchestrator | ok: [testbed-manager] 2026-03-28 00:13:40.280744 | 
orchestrator | 2026-03-28 00:13:40.280751 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-28 00:13:40.351496 | orchestrator | ok: [testbed-manager] 2026-03-28 00:13:40.351536 | orchestrator | 2026-03-28 00:13:40.351544 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-28 00:13:42.311479 | orchestrator | changed: [testbed-manager] 2026-03-28 00:13:42.312082 | orchestrator | 2026-03-28 00:13:42.312113 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-28 00:13:43.092994 | orchestrator | changed: [testbed-manager] 2026-03-28 00:13:43.093071 | orchestrator | 2026-03-28 00:13:43.093087 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-28 00:13:44.532402 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-28 00:13:44.532462 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-28 00:13:44.532474 | orchestrator | 2026-03-28 00:13:44.532484 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-28 00:13:45.920548 | orchestrator | changed: [testbed-manager] 2026-03-28 00:13:45.920588 | orchestrator | 2026-03-28 00:13:45.920607 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-28 00:13:47.695198 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-28 00:13:47.695281 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-28 00:13:47.695309 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-28 00:13:47.695321 | orchestrator | 2026-03-28 00:13:47.695334 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-28 00:13:47.753976 | orchestrator | skipping: 
[testbed-manager] 2026-03-28 00:13:47.754166 | orchestrator | 2026-03-28 00:13:47.754188 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-28 00:13:47.824873 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:13:47.824983 | orchestrator | 2026-03-28 00:13:47.825000 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-28 00:13:48.405482 | orchestrator | changed: [testbed-manager] 2026-03-28 00:13:48.405604 | orchestrator | 2026-03-28 00:13:48.405620 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-28 00:13:48.465286 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:13:48.465377 | orchestrator | 2026-03-28 00:13:48.465394 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-28 00:13:49.390746 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-28 00:13:49.390791 | orchestrator | changed: [testbed-manager] 2026-03-28 00:13:49.390800 | orchestrator | 2026-03-28 00:13:49.390807 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-28 00:13:49.430431 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:13:49.430471 | orchestrator | 2026-03-28 00:13:49.430478 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-28 00:13:49.468123 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:13:49.468213 | orchestrator | 2026-03-28 00:13:49.468223 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-28 00:13:49.508585 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:13:49.508623 | orchestrator | 2026-03-28 00:13:49.508631 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-28 00:13:49.585112 | 
orchestrator | skipping: [testbed-manager] 2026-03-28 00:13:49.585178 | orchestrator | 2026-03-28 00:13:49.585187 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-28 00:13:50.306592 | orchestrator | ok: [testbed-manager] 2026-03-28 00:13:50.306905 | orchestrator | 2026-03-28 00:13:50.306926 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-28 00:13:50.306939 | orchestrator | 2026-03-28 00:13:50.306953 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:13:51.712027 | orchestrator | ok: [testbed-manager] 2026-03-28 00:13:51.712063 | orchestrator | 2026-03-28 00:13:51.712069 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-28 00:13:52.708258 | orchestrator | changed: [testbed-manager] 2026-03-28 00:13:52.709291 | orchestrator | 2026-03-28 00:13:52.709318 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:13:52.709327 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-03-28 00:13:52.709334 | orchestrator | 2026-03-28 00:13:53.176383 | orchestrator | ok: Runtime: 0:09:00.268351 2026-03-28 00:13:53.192675 | 2026-03-28 00:13:53.192834 | TASK [Point out that logging in on the manager is now possible] 2026-03-28 00:13:53.239715 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-03-28 00:13:53.252270 | 2026-03-28 00:13:53.252419 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-28 00:13:53.299671 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-03-28 00:13:53.310419 | 2026-03-28 00:13:53.310545 | TASK [Run manager part 1 + 2] 2026-03-28 00:13:54.192955 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-28 00:13:54.250708 | orchestrator | 2026-03-28 00:13:54.250760 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-28 00:13:54.250767 | orchestrator | 2026-03-28 00:13:54.250779 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:13:57.303484 | orchestrator | ok: [testbed-manager] 2026-03-28 00:13:57.303535 | orchestrator | 2026-03-28 00:13:57.303554 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-28 00:13:57.342196 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:13:57.342246 | orchestrator | 2026-03-28 00:13:57.342254 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-28 00:13:57.401075 | orchestrator | ok: [testbed-manager] 2026-03-28 00:13:57.401280 | orchestrator | 2026-03-28 00:13:57.401303 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-28 00:13:57.453938 | orchestrator | ok: [testbed-manager] 2026-03-28 00:13:57.453987 | orchestrator | 2026-03-28 00:13:57.453995 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-28 00:13:57.527216 | orchestrator | ok: [testbed-manager] 2026-03-28 00:13:57.527269 | orchestrator | 2026-03-28 00:13:57.527278 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-28 00:13:57.589802 | orchestrator | ok: [testbed-manager] 2026-03-28 00:13:57.589857 | orchestrator | 2026-03-28 00:13:57.589866 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-28 00:13:57.631466 | 
orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-28 00:13:57.631515 | orchestrator | 2026-03-28 00:13:57.631521 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-28 00:13:58.359489 | orchestrator | ok: [testbed-manager] 2026-03-28 00:13:58.359544 | orchestrator | 2026-03-28 00:13:58.359554 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-28 00:13:58.411031 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:13:58.411079 | orchestrator | 2026-03-28 00:13:58.411085 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-28 00:13:59.868029 | orchestrator | changed: [testbed-manager] 2026-03-28 00:13:59.868091 | orchestrator | 2026-03-28 00:13:59.868101 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-28 00:14:00.440664 | orchestrator | ok: [testbed-manager] 2026-03-28 00:14:00.440720 | orchestrator | 2026-03-28 00:14:00.440728 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-28 00:14:01.633482 | orchestrator | changed: [testbed-manager] 2026-03-28 00:14:01.633586 | orchestrator | 2026-03-28 00:14:01.633603 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-28 00:14:18.016274 | orchestrator | changed: [testbed-manager] 2026-03-28 00:14:18.016366 | orchestrator | 2026-03-28 00:14:18.016385 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-28 00:14:18.693747 | orchestrator | ok: [testbed-manager] 2026-03-28 00:14:18.693831 | orchestrator | 2026-03-28 00:14:18.693849 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-28 00:14:18.753285 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:14:18.753392 | orchestrator | 2026-03-28 00:14:18.753412 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-28 00:14:19.745997 | orchestrator | changed: [testbed-manager] 2026-03-28 00:14:19.746127 | orchestrator | 2026-03-28 00:14:19.746138 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-28 00:14:20.761544 | orchestrator | changed: [testbed-manager] 2026-03-28 00:14:20.761588 | orchestrator | 2026-03-28 00:14:20.761597 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-28 00:14:21.356622 | orchestrator | changed: [testbed-manager] 2026-03-28 00:14:21.356702 | orchestrator | 2026-03-28 00:14:21.356714 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-28 00:14:21.393829 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-28 00:14:21.393974 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-28 00:14:21.393991 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-28 00:14:21.394003 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-28 00:14:23.544772 | orchestrator | changed: [testbed-manager] 2026-03-28 00:14:23.544871 | orchestrator | 2026-03-28 00:14:23.544889 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-28 00:14:32.684625 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-28 00:14:32.684675 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-28 00:14:32.684686 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-28 00:14:32.684693 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-28 00:14:32.684703 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-28 00:14:32.684710 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-28 00:14:32.684716 | orchestrator | 2026-03-28 00:14:32.684724 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-28 00:14:33.764264 | orchestrator | changed: [testbed-manager] 2026-03-28 00:14:33.764354 | orchestrator | 2026-03-28 00:14:33.764370 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-28 00:14:37.002184 | orchestrator | changed: [testbed-manager] 2026-03-28 00:14:37.002270 | orchestrator | 2026-03-28 00:14:37.002285 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-28 00:14:37.046765 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:14:37.046852 | orchestrator | 2026-03-28 00:14:37.046872 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-28 00:16:22.437772 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:22.437958 | orchestrator | 2026-03-28 00:16:22.437977 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-28 00:16:23.626287 | orchestrator | ok: [testbed-manager] 2026-03-28 00:16:23.626358 | 
orchestrator | 2026-03-28 00:16:23.626376 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:16:23.626392 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-03-28 00:16:23.626406 | orchestrator | 2026-03-28 00:16:23.960689 | orchestrator | ok: Runtime: 0:02:30.127926 2026-03-28 00:16:23.979746 | 2026-03-28 00:16:23.979923 | TASK [Reboot manager] 2026-03-28 00:16:25.518578 | orchestrator | ok: Runtime: 0:00:00.977702 2026-03-28 00:16:25.537073 | 2026-03-28 00:16:25.537257 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-28 00:16:41.970760 | orchestrator | ok 2026-03-28 00:16:41.978196 | 2026-03-28 00:16:41.978326 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-28 00:17:42.023079 | orchestrator | ok 2026-03-28 00:17:42.032372 | 2026-03-28 00:17:42.032503 | TASK [Deploy manager + bootstrap nodes] 2026-03-28 00:17:44.853849 | orchestrator | 2026-03-28 00:17:44.854175 | orchestrator | # DEPLOY MANAGER 2026-03-28 00:17:44.854216 | orchestrator | 2026-03-28 00:17:44.854241 | orchestrator | + set -e 2026-03-28 00:17:44.854263 | orchestrator | + echo 2026-03-28 00:17:44.854285 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-28 00:17:44.854313 | orchestrator | + echo 2026-03-28 00:17:44.854382 | orchestrator | + cat /opt/manager-vars.sh 2026-03-28 00:17:44.855963 | orchestrator | export NUMBER_OF_NODES=6 2026-03-28 00:17:44.855997 | orchestrator | 2026-03-28 00:17:44.856010 | orchestrator | export CEPH_VERSION=reef 2026-03-28 00:17:44.856023 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-28 00:17:44.856036 | orchestrator | export MANAGER_VERSION=9.5.0 2026-03-28 00:17:44.856059 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-28 00:17:44.856071 | orchestrator | 2026-03-28 00:17:44.856089 | orchestrator | export ARA=false 2026-03-28 00:17:44.856101 | orchestrator 
| export DEPLOY_MODE=manager 2026-03-28 00:17:44.856118 | orchestrator | export TEMPEST=true 2026-03-28 00:17:44.856130 | orchestrator | export IS_ZUUL=true 2026-03-28 00:17:44.856141 | orchestrator | 2026-03-28 00:17:44.856159 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.253 2026-03-28 00:17:44.856171 | orchestrator | export EXTERNAL_API=false 2026-03-28 00:17:44.856182 | orchestrator | 2026-03-28 00:17:44.856193 | orchestrator | export IMAGE_USER=ubuntu 2026-03-28 00:17:44.856208 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-28 00:17:44.856219 | orchestrator | 2026-03-28 00:17:44.856230 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-28 00:17:44.856241 | orchestrator | 2026-03-28 00:17:44.856252 | orchestrator | + echo 2026-03-28 00:17:44.856265 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 00:17:44.856956 | orchestrator | ++ export INTERACTIVE=false 2026-03-28 00:17:44.856997 | orchestrator | ++ INTERACTIVE=false 2026-03-28 00:17:44.857014 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 00:17:44.857054 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 00:17:44.857075 | orchestrator | + source /opt/manager-vars.sh 2026-03-28 00:17:44.857087 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-28 00:17:44.857100 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-28 00:17:44.857112 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-28 00:17:44.857123 | orchestrator | ++ CEPH_VERSION=reef 2026-03-28 00:17:44.857136 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-28 00:17:44.857148 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-28 00:17:44.857160 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-28 00:17:44.857172 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-28 00:17:44.857184 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-28 00:17:44.857207 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-28 00:17:44.857220 | orchestrator | ++ export ARA=false 
2026-03-28 00:17:44.857231 | orchestrator | ++ ARA=false 2026-03-28 00:17:44.857245 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-28 00:17:44.857256 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-28 00:17:44.857269 | orchestrator | ++ export TEMPEST=true 2026-03-28 00:17:44.857282 | orchestrator | ++ TEMPEST=true 2026-03-28 00:17:44.857295 | orchestrator | ++ export IS_ZUUL=true 2026-03-28 00:17:44.857306 | orchestrator | ++ IS_ZUUL=true 2026-03-28 00:17:44.857321 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.253 2026-03-28 00:17:44.857333 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.253 2026-03-28 00:17:44.857343 | orchestrator | ++ export EXTERNAL_API=false 2026-03-28 00:17:44.857354 | orchestrator | ++ EXTERNAL_API=false 2026-03-28 00:17:44.857365 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-28 00:17:44.857376 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-28 00:17:44.857387 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-28 00:17:44.857398 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-28 00:17:44.857409 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-28 00:17:44.857420 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-28 00:17:44.857431 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-28 00:17:44.921055 | orchestrator | + docker version 2026-03-28 00:17:45.049212 | orchestrator | Client: Docker Engine - Community 2026-03-28 00:17:45.049313 | orchestrator | Version: 27.5.1 2026-03-28 00:17:45.049326 | orchestrator | API version: 1.47 2026-03-28 00:17:45.049339 | orchestrator | Go version: go1.22.11 2026-03-28 00:17:45.049349 | orchestrator | Git commit: 9f9e405 2026-03-28 00:17:45.049359 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-28 00:17:45.049370 | orchestrator | OS/Arch: linux/amd64 2026-03-28 00:17:45.049379 | orchestrator | Context: default 2026-03-28 00:17:45.049389 | orchestrator | 2026-03-28 00:17:45.049400 | 
orchestrator | Server: Docker Engine - Community 2026-03-28 00:17:45.049410 | orchestrator | Engine: 2026-03-28 00:17:45.049420 | orchestrator | Version: 27.5.1 2026-03-28 00:17:45.049430 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-28 00:17:45.049469 | orchestrator | Go version: go1.22.11 2026-03-28 00:17:45.049480 | orchestrator | Git commit: 4c9b3b0 2026-03-28 00:17:45.049490 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-28 00:17:45.049499 | orchestrator | OS/Arch: linux/amd64 2026-03-28 00:17:45.049509 | orchestrator | Experimental: false 2026-03-28 00:17:45.049519 | orchestrator | containerd: 2026-03-28 00:17:45.049528 | orchestrator | Version: v2.2.2 2026-03-28 00:17:45.049539 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-03-28 00:17:45.049549 | orchestrator | runc: 2026-03-28 00:17:45.049559 | orchestrator | Version: 1.3.4 2026-03-28 00:17:45.049569 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-28 00:17:45.049578 | orchestrator | docker-init: 2026-03-28 00:17:45.049588 | orchestrator | Version: 0.19.0 2026-03-28 00:17:45.049599 | orchestrator | GitCommit: de40ad0 2026-03-28 00:17:45.054693 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-28 00:17:45.065771 | orchestrator | + set -e 2026-03-28 00:17:45.065814 | orchestrator | + source /opt/manager-vars.sh 2026-03-28 00:17:45.065825 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-28 00:17:45.065837 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-28 00:17:45.065847 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-28 00:17:45.065857 | orchestrator | ++ CEPH_VERSION=reef 2026-03-28 00:17:45.065868 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-28 00:17:45.065886 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-28 00:17:45.065901 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-28 00:17:45.065918 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-28 00:17:45.065935 | orchestrator 
| ++ export OPENSTACK_VERSION=2024.2 2026-03-28 00:17:45.065950 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-28 00:17:45.065960 | orchestrator | ++ export ARA=false 2026-03-28 00:17:45.065970 | orchestrator | ++ ARA=false 2026-03-28 00:17:45.065980 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-28 00:17:45.065990 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-28 00:17:45.066007 | orchestrator | ++ export TEMPEST=true 2026-03-28 00:17:45.066092 | orchestrator | ++ TEMPEST=true 2026-03-28 00:17:45.066111 | orchestrator | ++ export IS_ZUUL=true 2026-03-28 00:17:45.066129 | orchestrator | ++ IS_ZUUL=true 2026-03-28 00:17:45.066146 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.253 2026-03-28 00:17:45.066165 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.253 2026-03-28 00:17:45.066182 | orchestrator | ++ export EXTERNAL_API=false 2026-03-28 00:17:45.066199 | orchestrator | ++ EXTERNAL_API=false 2026-03-28 00:17:45.066215 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-28 00:17:45.066233 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-28 00:17:45.066251 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-28 00:17:45.066268 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-28 00:17:45.066286 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-28 00:17:45.066304 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-28 00:17:45.066322 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 00:17:45.066339 | orchestrator | ++ export INTERACTIVE=false 2026-03-28 00:17:45.066350 | orchestrator | ++ INTERACTIVE=false 2026-03-28 00:17:45.066359 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 00:17:45.066379 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 00:17:45.066395 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-28 00:17:45.066411 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-03-28 00:17:45.073672 | orchestrator | + set -e 2026-03-28 
00:17:45.073777 | orchestrator | + VERSION=9.5.0 2026-03-28 00:17:45.073793 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-03-28 00:17:45.085111 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-28 00:17:45.085177 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-28 00:17:45.090244 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-28 00:17:45.096252 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-28 00:17:45.105940 | orchestrator | /opt/configuration ~ 2026-03-28 00:17:45.105999 | orchestrator | + set -e 2026-03-28 00:17:45.106008 | orchestrator | + pushd /opt/configuration 2026-03-28 00:17:45.106069 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-28 00:17:45.107517 | orchestrator | + source /opt/venv/bin/activate 2026-03-28 00:17:45.108897 | orchestrator | ++ deactivate nondestructive 2026-03-28 00:17:45.108937 | orchestrator | ++ '[' -n '' ']' 2026-03-28 00:17:45.108947 | orchestrator | ++ '[' -n '' ']' 2026-03-28 00:17:45.108981 | orchestrator | ++ hash -r 2026-03-28 00:17:45.108996 | orchestrator | ++ '[' -n '' ']' 2026-03-28 00:17:45.109003 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-28 00:17:45.109010 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-28 00:17:45.109018 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-28 00:17:45.109026 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-28 00:17:45.109033 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-28 00:17:45.109094 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-28 00:17:45.109109 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-28 00:17:45.109123 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-28 00:17:45.109136 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-28 00:17:45.109148 | orchestrator | ++ export PATH 2026-03-28 00:17:45.109166 | orchestrator | ++ '[' -n '' ']' 2026-03-28 00:17:45.109179 | orchestrator | ++ '[' -z '' ']' 2026-03-28 00:17:45.109190 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-28 00:17:45.109202 | orchestrator | ++ PS1='(venv) ' 2026-03-28 00:17:45.109214 | orchestrator | ++ export PS1 2026-03-28 00:17:45.109226 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-28 00:17:45.109239 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-28 00:17:45.109358 | orchestrator | ++ hash -r 2026-03-28 00:17:45.109619 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-28 00:17:46.298175 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-28 00:17:46.299007 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.0) 2026-03-28 00:17:46.300390 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-28 00:17:46.301986 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-28 00:17:46.303095 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-03-28 00:17:46.313742 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-28 00:17:46.314926 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-28 00:17:46.316009 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-28 00:17:46.317449 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-28 00:17:46.356388 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6) 2026-03-28 00:17:46.357985 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-28 00:17:46.359457 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-28 00:17:46.360739 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-28 00:17:46.364905 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-28 00:17:46.590005 | orchestrator | ++ which gilt 2026-03-28 00:17:46.593454 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-28 00:17:46.593510 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-28 00:17:46.855423 | orchestrator | osism.cfg-generics: 2026-03-28 00:17:47.021633 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-28 00:17:47.021776 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-28 00:17:47.022205 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-28 00:17:47.022225 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-28 00:17:47.969083 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-28 00:17:47.980178 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-28 00:17:48.348416 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-28 00:17:48.423117 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-28 00:17:48.423229 | orchestrator | + deactivate 2026-03-28 00:17:48.423247 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-28 00:17:48.423261 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-28 00:17:48.423272 | orchestrator | + export PATH 2026-03-28 00:17:48.423284 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-28 00:17:48.423295 | orchestrator | + '[' -n '' ']' 2026-03-28 00:17:48.423309 | orchestrator | + hash -r 2026-03-28 00:17:48.423334 | orchestrator | ~ 2026-03-28 00:17:48.423346 | orchestrator | + '[' -n '' ']' 2026-03-28 00:17:48.423357 | orchestrator | + unset VIRTUAL_ENV 2026-03-28 00:17:48.423368 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-28 00:17:48.423379 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-28 00:17:48.423390 | orchestrator | + unset -f deactivate 2026-03-28 00:17:48.423401 | orchestrator | + popd 2026-03-28 00:17:48.425772 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-28 00:17:48.425798 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-28 00:17:48.426942 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-28 00:17:48.503403 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-28 00:17:48.503507 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-28 00:17:48.504481 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-28 00:17:48.576313 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-28 00:17:48.576550 | orchestrator | ++ semver 2024.2 2025.1 2026-03-28 00:17:48.639423 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-28 00:17:48.639519 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-03-28 00:17:48.725862 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-28 00:17:48.725978 | orchestrator | + source /opt/venv/bin/activate 2026-03-28 00:17:48.725995 | orchestrator | ++ deactivate nondestructive 2026-03-28 00:17:48.726007 | orchestrator | ++ '[' -n '' ']' 2026-03-28 00:17:48.726074 | orchestrator | ++ '[' -n '' ']' 2026-03-28 00:17:48.726086 | orchestrator | ++ hash -r 2026-03-28 00:17:48.726098 | orchestrator | ++ '[' -n '' ']' 2026-03-28 00:17:48.726109 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-28 00:17:48.726120 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-28 00:17:48.726132 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-28 00:17:48.726145 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-28 00:17:48.726156 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-28 00:17:48.726180 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-28 00:17:48.726193 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-28 00:17:48.726205 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-28 00:17:48.726242 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-28 00:17:48.726255 | orchestrator | ++ export PATH 2026-03-28 00:17:48.726266 | orchestrator | ++ '[' -n '' ']' 2026-03-28 00:17:48.726277 | orchestrator | ++ '[' -z '' ']' 2026-03-28 00:17:48.726288 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-28 00:17:48.726299 | orchestrator | ++ PS1='(venv) ' 2026-03-28 00:17:48.726310 | orchestrator | ++ export PS1 2026-03-28 00:17:48.726322 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-28 00:17:48.726333 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-28 00:17:48.726344 | orchestrator | ++ hash -r 2026-03-28 00:17:48.726360 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-28 00:17:50.047547 | orchestrator | 2026-03-28 00:17:50.047658 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-28 00:17:50.047674 | orchestrator | 2026-03-28 00:17:50.047687 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-28 00:17:50.626418 | orchestrator | ok: [testbed-manager] 2026-03-28 00:17:50.626540 | orchestrator | 2026-03-28 00:17:50.626567 | orchestrator | TASK [Copy fact files] ********************************************************* 
2026-03-28 00:17:51.656448 | orchestrator | changed: [testbed-manager] 2026-03-28 00:17:51.656551 | orchestrator | 2026-03-28 00:17:51.656568 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-28 00:17:51.656608 | orchestrator | 2026-03-28 00:17:51.656621 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:17:53.992246 | orchestrator | ok: [testbed-manager] 2026-03-28 00:17:53.992340 | orchestrator | 2026-03-28 00:17:53.992359 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-28 00:17:54.050060 | orchestrator | ok: [testbed-manager] 2026-03-28 00:17:54.050143 | orchestrator | 2026-03-28 00:17:54.050160 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-28 00:17:54.462206 | orchestrator | changed: [testbed-manager] 2026-03-28 00:17:54.462283 | orchestrator | 2026-03-28 00:17:54.462299 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-28 00:17:54.490537 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:17:54.490662 | orchestrator | 2026-03-28 00:17:54.490732 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-28 00:17:54.802539 | orchestrator | changed: [testbed-manager] 2026-03-28 00:17:54.802610 | orchestrator | 2026-03-28 00:17:54.802625 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-28 00:17:55.115181 | orchestrator | ok: [testbed-manager] 2026-03-28 00:17:55.115288 | orchestrator | 2026-03-28 00:17:55.115316 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-28 00:17:55.207830 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:17:55.207913 | orchestrator | 2026-03-28 00:17:55.207928 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-03-28 00:17:55.207941 | orchestrator | 2026-03-28 00:17:55.207952 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:17:56.895242 | orchestrator | ok: [testbed-manager] 2026-03-28 00:17:56.895330 | orchestrator | 2026-03-28 00:17:56.895347 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-28 00:17:56.983141 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-28 00:17:56.983229 | orchestrator | 2026-03-28 00:17:56.983246 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-28 00:17:57.047640 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-28 00:17:57.047787 | orchestrator | 2026-03-28 00:17:57.047813 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-28 00:17:58.148297 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-28 00:17:58.148399 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-03-28 00:17:58.148413 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-28 00:17:58.148426 | orchestrator | 2026-03-28 00:17:58.148441 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-28 00:17:59.950230 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-28 00:17:59.950356 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-28 00:17:59.950380 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-28 00:17:59.950401 | orchestrator | 2026-03-28 00:17:59.950422 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-03-28 00:18:00.589058 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-28 00:18:00.589158 | orchestrator | changed: [testbed-manager] 2026-03-28 00:18:00.589181 | orchestrator | 2026-03-28 00:18:00.589201 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-28 00:18:01.236200 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-28 00:18:01.236296 | orchestrator | changed: [testbed-manager] 2026-03-28 00:18:01.236311 | orchestrator | 2026-03-28 00:18:01.236323 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-28 00:18:01.298291 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:18:01.298408 | orchestrator | 2026-03-28 00:18:01.298425 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-28 00:18:01.664737 | orchestrator | ok: [testbed-manager] 2026-03-28 00:18:01.664850 | orchestrator | 2026-03-28 00:18:01.664867 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-28 00:18:01.744376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-28 00:18:01.744467 | orchestrator | 2026-03-28 00:18:01.744482 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-28 00:18:02.882545 | orchestrator | changed: [testbed-manager] 2026-03-28 00:18:02.882644 | orchestrator | 2026-03-28 00:18:02.882660 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-28 00:18:03.754274 | orchestrator | changed: [testbed-manager] 2026-03-28 00:18:03.754430 | orchestrator | 2026-03-28 00:18:03.754462 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-28 00:18:14.664465 | 
orchestrator | changed: [testbed-manager] 2026-03-28 00:18:14.664568 | orchestrator | 2026-03-28 00:18:14.664584 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-28 00:18:14.714234 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:18:14.714311 | orchestrator | 2026-03-28 00:18:14.714342 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-28 00:18:14.714355 | orchestrator | 2026-03-28 00:18:14.714367 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:18:16.533789 | orchestrator | ok: [testbed-manager] 2026-03-28 00:18:16.533866 | orchestrator | 2026-03-28 00:18:16.533875 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-28 00:18:16.652631 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-28 00:18:16.652774 | orchestrator | 2026-03-28 00:18:16.652791 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-28 00:18:16.709913 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-28 00:18:16.710001 | orchestrator | 2026-03-28 00:18:16.710015 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-28 00:18:19.235271 | orchestrator | ok: [testbed-manager] 2026-03-28 00:18:19.235377 | orchestrator | 2026-03-28 00:18:19.235394 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-28 00:18:19.271543 | orchestrator | ok: [testbed-manager] 2026-03-28 00:18:19.271635 | orchestrator | 2026-03-28 00:18:19.271695 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-28 00:18:19.399224 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-28 00:18:19.399360 | orchestrator | 2026-03-28 00:18:19.399391 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-28 00:18:22.298944 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-28 00:18:22.299074 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-28 00:18:22.299090 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-28 00:18:22.299103 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-28 00:18:22.299114 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-28 00:18:22.299126 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-28 00:18:22.299137 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-28 00:18:22.299148 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-28 00:18:22.299160 | orchestrator | 2026-03-28 00:18:22.299172 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-28 00:18:22.962983 | orchestrator | changed: [testbed-manager] 2026-03-28 00:18:22.963076 | orchestrator | 2026-03-28 00:18:22.963093 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-28 00:18:23.612436 | orchestrator | changed: [testbed-manager] 2026-03-28 00:18:23.612535 | orchestrator | 2026-03-28 00:18:23.612552 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-28 00:18:23.691439 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-28 00:18:23.691534 | orchestrator | 2026-03-28 00:18:23.691550 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-03-28 00:18:24.917246 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-28 00:18:24.917347 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-28 00:18:24.917363 | orchestrator | 2026-03-28 00:18:24.917376 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-28 00:18:25.570307 | orchestrator | changed: [testbed-manager] 2026-03-28 00:18:25.570407 | orchestrator | 2026-03-28 00:18:25.570423 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-28 00:18:25.621931 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:18:25.622173 | orchestrator | 2026-03-28 00:18:25.622205 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-28 00:18:25.704280 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-28 00:18:25.704391 | orchestrator | 2026-03-28 00:18:25.704419 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-28 00:18:26.340370 | orchestrator | changed: [testbed-manager] 2026-03-28 00:18:26.340490 | orchestrator | 2026-03-28 00:18:26.340507 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-28 00:18:26.408832 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-28 00:18:26.408936 | orchestrator | 2026-03-28 00:18:26.408952 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-28 00:18:27.803126 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-28 00:18:27.803233 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-03-28 00:18:27.803248 | orchestrator | changed: [testbed-manager] 2026-03-28 00:18:27.803261 | orchestrator | 2026-03-28 00:18:27.803273 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-28 00:18:28.417401 | orchestrator | changed: [testbed-manager] 2026-03-28 00:18:28.417492 | orchestrator | 2026-03-28 00:18:28.417507 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-28 00:18:28.468615 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:18:28.468745 | orchestrator | 2026-03-28 00:18:28.468762 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-28 00:18:28.560356 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-28 00:18:28.560434 | orchestrator | 2026-03-28 00:18:28.560445 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-28 00:18:29.104477 | orchestrator | changed: [testbed-manager] 2026-03-28 00:18:29.104570 | orchestrator | 2026-03-28 00:18:29.104587 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-28 00:18:29.521307 | orchestrator | changed: [testbed-manager] 2026-03-28 00:18:29.521401 | orchestrator | 2026-03-28 00:18:29.521414 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-28 00:18:30.768846 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-28 00:18:30.768950 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-28 00:18:30.768966 | orchestrator | 2026-03-28 00:18:30.768980 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-28 00:18:31.417300 | orchestrator | changed: [testbed-manager] 2026-03-28 
00:18:31.417433 | orchestrator | 2026-03-28 00:18:31.417465 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-28 00:18:31.812713 | orchestrator | ok: [testbed-manager] 2026-03-28 00:18:31.812811 | orchestrator | 2026-03-28 00:18:31.812827 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-28 00:18:32.195249 | orchestrator | changed: [testbed-manager] 2026-03-28 00:18:32.195375 | orchestrator | 2026-03-28 00:18:32.195402 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-28 00:18:32.247129 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:18:32.247206 | orchestrator | 2026-03-28 00:18:32.247215 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-28 00:18:32.321723 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-28 00:18:32.321845 | orchestrator | 2026-03-28 00:18:32.321862 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-28 00:18:32.370229 | orchestrator | ok: [testbed-manager] 2026-03-28 00:18:32.370320 | orchestrator | 2026-03-28 00:18:32.370334 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-28 00:18:34.472802 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-28 00:18:34.472929 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-28 00:18:34.472955 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-28 00:18:34.472976 | orchestrator | 2026-03-28 00:18:34.472996 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-28 00:18:35.223270 | orchestrator | changed: [testbed-manager] 2026-03-28 
00:18:35.223396 | orchestrator | 2026-03-28 00:18:35.223419 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-28 00:18:35.964293 | orchestrator | changed: [testbed-manager] 2026-03-28 00:18:35.964379 | orchestrator | 2026-03-28 00:18:35.964391 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-28 00:18:36.681178 | orchestrator | changed: [testbed-manager] 2026-03-28 00:18:36.681266 | orchestrator | 2026-03-28 00:18:36.681275 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-28 00:18:36.747224 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-28 00:18:36.747317 | orchestrator | 2026-03-28 00:18:36.747331 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-28 00:18:36.802716 | orchestrator | ok: [testbed-manager] 2026-03-28 00:18:36.802819 | orchestrator | 2026-03-28 00:18:36.802833 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-28 00:18:37.577391 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-28 00:18:37.577482 | orchestrator | 2026-03-28 00:18:37.577492 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-28 00:18:37.678093 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-28 00:18:37.678181 | orchestrator | 2026-03-28 00:18:37.678194 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-28 00:18:38.395110 | orchestrator | changed: [testbed-manager] 2026-03-28 00:18:38.395218 | orchestrator | 2026-03-28 00:18:38.395246 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-03-28 00:18:39.047584 | orchestrator | ok: [testbed-manager] 2026-03-28 00:18:39.047699 | orchestrator | 2026-03-28 00:18:39.047709 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-28 00:18:39.109403 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:18:39.109472 | orchestrator | 2026-03-28 00:18:39.109480 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-28 00:18:39.171080 | orchestrator | ok: [testbed-manager] 2026-03-28 00:18:39.171170 | orchestrator | 2026-03-28 00:18:39.171184 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-28 00:18:40.011118 | orchestrator | changed: [testbed-manager] 2026-03-28 00:18:40.011247 | orchestrator | 2026-03-28 00:18:40.011274 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-28 00:19:52.949966 | orchestrator | changed: [testbed-manager] 2026-03-28 00:19:52.950081 | orchestrator | 2026-03-28 00:19:52.950093 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-28 00:19:54.965950 | orchestrator | ok: [testbed-manager] 2026-03-28 00:19:54.966142 | orchestrator | 2026-03-28 00:19:54.966162 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-28 00:19:55.014949 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:19:55.015071 | orchestrator | 2026-03-28 00:19:55.015096 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-28 00:19:57.688945 | orchestrator | changed: [testbed-manager] 2026-03-28 00:19:57.689048 | orchestrator | 2026-03-28 00:19:57.689064 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
2026-03-28 00:19:57.750997 | orchestrator | ok: [testbed-manager] 2026-03-28 00:19:57.751092 | orchestrator | 2026-03-28 00:19:57.751107 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-28 00:19:57.751120 | orchestrator | 2026-03-28 00:19:57.751131 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-28 00:19:57.907999 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:19:57.908076 | orchestrator | 2026-03-28 00:19:57.908084 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-28 00:20:57.968397 | orchestrator | Pausing for 60 seconds 2026-03-28 00:20:57.968512 | orchestrator | changed: [testbed-manager] 2026-03-28 00:20:57.968528 | orchestrator | 2026-03-28 00:20:57.968538 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-28 00:21:01.022284 | orchestrator | changed: [testbed-manager] 2026-03-28 00:21:01.022373 | orchestrator | 2026-03-28 00:21:01.022386 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-28 00:22:03.114274 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-28 00:22:03.114436 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-03-28 00:22:03.114476 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
2026-03-28 00:22:03.115485 | orchestrator | changed: [testbed-manager] 2026-03-28 00:22:03.115537 | orchestrator | 2026-03-28 00:22:03.115558 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-28 00:22:13.623188 | orchestrator | changed: [testbed-manager] 2026-03-28 00:22:13.623291 | orchestrator | 2026-03-28 00:22:13.623303 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-28 00:22:13.708223 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-28 00:22:13.708312 | orchestrator | 2026-03-28 00:22:13.708325 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-28 00:22:13.708336 | orchestrator | 2026-03-28 00:22:13.708345 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-28 00:22:13.757984 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:22:13.758176 | orchestrator | 2026-03-28 00:22:13.758213 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-28 00:22:13.824346 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-28 00:22:13.824520 | orchestrator | 2026-03-28 00:22:13.824547 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-28 00:22:14.684558 | orchestrator | changed: [testbed-manager] 2026-03-28 00:22:14.684684 | orchestrator | 2026-03-28 00:22:14.684711 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-28 00:22:17.911552 | orchestrator | ok: [testbed-manager] 2026-03-28 00:22:17.911661 | orchestrator | 2026-03-28 00:22:17.911678 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-03-28 00:22:17.987789 | orchestrator | ok: [testbed-manager] => { 2026-03-28 00:22:17.987893 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-28 00:22:17.987910 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-28 00:22:17.987924 | orchestrator | "Checking running containers against expected versions...", 2026-03-28 00:22:17.987937 | orchestrator | "", 2026-03-28 00:22:17.987948 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-28 00:22:17.987960 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-03-28 00:22:17.987972 | orchestrator | " Enabled: true", 2026-03-28 00:22:17.987983 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-03-28 00:22:17.987995 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:22:17.988006 | orchestrator | "", 2026-03-28 00:22:17.988017 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-28 00:22:17.988055 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-03-28 00:22:17.988067 | orchestrator | " Enabled: true", 2026-03-28 00:22:17.988079 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-03-28 00:22:17.988090 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:22:17.988101 | orchestrator | "", 2026-03-28 00:22:17.988112 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-28 00:22:17.988123 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-03-28 00:22:17.988134 | orchestrator | " Enabled: true", 2026-03-28 00:22:17.988145 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-03-28 00:22:17.988156 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:22:17.988167 | orchestrator | 
"", 2026-03-28 00:22:17.988178 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-28 00:22:17.988190 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-03-28 00:22:17.988201 | orchestrator | " Enabled: true", 2026-03-28 00:22:17.988211 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-03-28 00:22:17.988222 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:22:17.988233 | orchestrator | "", 2026-03-28 00:22:17.988247 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-28 00:22:17.988258 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-03-28 00:22:17.988268 | orchestrator | " Enabled: true", 2026-03-28 00:22:17.988280 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-03-28 00:22:17.988290 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:22:17.988301 | orchestrator | "", 2026-03-28 00:22:17.988313 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-28 00:22:17.988326 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-28 00:22:17.988338 | orchestrator | " Enabled: true", 2026-03-28 00:22:17.988350 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-28 00:22:17.988363 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:22:17.988402 | orchestrator | "", 2026-03-28 00:22:17.988422 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-28 00:22:17.988435 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-28 00:22:17.988448 | orchestrator | " Enabled: true", 2026-03-28 00:22:17.988461 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-28 00:22:17.988474 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:22:17.988486 | orchestrator | "", 2026-03-28 00:22:17.988498 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-03-28 00:22:17.988511 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-28 00:22:17.988523 | orchestrator | " Enabled: true", 2026-03-28 00:22:17.988536 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-28 00:22:17.988548 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:22:17.988561 | orchestrator | "", 2026-03-28 00:22:17.988574 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-28 00:22:17.988586 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-03-28 00:22:17.988599 | orchestrator | " Enabled: true", 2026-03-28 00:22:17.988611 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-03-28 00:22:17.988623 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:22:17.988637 | orchestrator | "", 2026-03-28 00:22:17.988649 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-28 00:22:17.988662 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-28 00:22:17.988675 | orchestrator | " Enabled: true", 2026-03-28 00:22:17.988688 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-28 00:22:17.988699 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:22:17.988709 | orchestrator | "", 2026-03-28 00:22:17.988720 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-28 00:22:17.988732 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-28 00:22:17.988749 | orchestrator | " Enabled: true", 2026-03-28 00:22:17.988760 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-28 00:22:17.988771 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:22:17.988782 | orchestrator | "", 2026-03-28 00:22:17.988793 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-28 00:22:17.988804 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-28 00:22:17.988815 | orchestrator | " Enabled: true", 2026-03-28 00:22:17.988826 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-28 00:22:17.988836 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:22:17.988848 | orchestrator | "", 2026-03-28 00:22:17.988859 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-28 00:22:17.988870 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-28 00:22:17.988881 | orchestrator | " Enabled: true", 2026-03-28 00:22:17.988892 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-28 00:22:17.988903 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:22:17.988914 | orchestrator | "", 2026-03-28 00:22:17.988925 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-28 00:22:17.988936 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-28 00:22:17.988946 | orchestrator | " Enabled: true", 2026-03-28 00:22:17.988957 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-28 00:22:17.988986 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:22:17.988997 | orchestrator | "", 2026-03-28 00:22:17.989009 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-28 00:22:17.989020 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-28 00:22:17.989039 | orchestrator | " Enabled: true", 2026-03-28 00:22:17.989066 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-28 00:22:17.989089 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:22:17.989100 | orchestrator | "", 2026-03-28 00:22:17.989112 | orchestrator | "=== Summary ===", 2026-03-28 00:22:17.989123 | orchestrator | "Errors (version mismatches): 0", 2026-03-28 00:22:17.989134 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-03-28 00:22:17.989145 | orchestrator | "", 2026-03-28 00:22:17.989156 | orchestrator | "✅ All running containers match expected versions!" 2026-03-28 00:22:17.989167 | orchestrator | ] 2026-03-28 00:22:17.989179 | orchestrator | } 2026-03-28 00:22:17.989190 | orchestrator | 2026-03-28 00:22:17.989202 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-28 00:22:18.044768 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:22:18.044853 | orchestrator | 2026-03-28 00:22:18.044867 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:22:18.044878 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-28 00:22:18.044889 | orchestrator | 2026-03-28 00:22:18.151351 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-28 00:22:18.151464 | orchestrator | + deactivate 2026-03-28 00:22:18.151475 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-28 00:22:18.151484 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-28 00:22:18.151491 | orchestrator | + export PATH 2026-03-28 00:22:18.151498 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-28 00:22:18.151506 | orchestrator | + '[' -n '' ']' 2026-03-28 00:22:18.151512 | orchestrator | + hash -r 2026-03-28 00:22:18.151519 | orchestrator | + '[' -n '' ']' 2026-03-28 00:22:18.151526 | orchestrator | + unset VIRTUAL_ENV 2026-03-28 00:22:18.151532 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-28 00:22:18.151539 | orchestrator | + '[' '!' 
'' = nondestructive ']'
2026-03-28 00:22:18.151546 | orchestrator | + unset -f deactivate
2026-03-28 00:22:18.151553 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-03-28 00:22:18.160852 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-28 00:22:18.160912 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-03-28 00:22:18.160951 | orchestrator | + local max_attempts=60
2026-03-28 00:22:18.160962 | orchestrator | + local name=ceph-ansible
2026-03-28 00:22:18.160972 | orchestrator | + local attempt_num=1
2026-03-28 00:22:18.161459 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 00:22:18.195928 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-28 00:22:18.196009 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-28 00:22:18.196023 | orchestrator | + local max_attempts=60
2026-03-28 00:22:18.196035 | orchestrator | + local name=kolla-ansible
2026-03-28 00:22:18.196046 | orchestrator | + local attempt_num=1
2026-03-28 00:22:18.196644 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-28 00:22:18.225350 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-28 00:22:18.225470 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-28 00:22:18.225483 | orchestrator | + local max_attempts=60
2026-03-28 00:22:18.225491 | orchestrator | + local name=osism-ansible
2026-03-28 00:22:18.225500 | orchestrator | + local attempt_num=1
2026-03-28 00:22:18.225848 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-28 00:22:18.255701 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-28 00:22:18.255797 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-28 00:22:18.255820 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-03-28 00:22:18.931051 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-03-28 00:22:19.125838 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-03-28 00:22:19.125917 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy)
2026-03-28 00:22:19.125929 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy)
2026-03-28 00:22:19.125937 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp
2026-03-28 00:22:19.125947 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp
2026-03-28 00:22:19.125973 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy)
2026-03-28 00:22:19.125981 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy)
2026-03-28 00:22:19.125988 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy)
2026-03-28 00:22:19.125995 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy)
2026-03-28 00:22:19.126003 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp
2026-03-28 00:22:19.126010 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy)
2026-03-28 00:22:19.126057 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp
2026-03-28 00:22:19.126065 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy)
2026-03-28 00:22:19.126092 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp
2026-03-28 00:22:19.126256 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy)
2026-03-28 00:22:19.126271 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy)
2026-03-28 00:22:19.131966 | orchestrator | ++ semver 9.5.0 7.0.0
2026-03-28 00:22:19.186912 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-28 00:22:19.187003 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2026-03-28 00:22:19.191307 | orchestrator | + osism apply resolvconf -l testbed-manager
2026-03-28 00:22:31.387956 | orchestrator | 2026-03-28 00:22:31 | INFO  | Task e70b5a3b-9d1f-4f7f-9a41-121d79c5b80f (resolvconf) was prepared for execution.
2026-03-28 00:22:31.388052 | orchestrator | 2026-03-28 00:22:31 | INFO  | It takes a moment until task e70b5a3b-9d1f-4f7f-9a41-121d79c5b80f (resolvconf) has been started and output is visible here.
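The xtrace (`+`) lines above show a shell helper, `wait_for_container_healthy`, probing `docker inspect` until each manager container reports `healthy`. Reconstructed from that trace, the helper plausibly looks like the following minimal sketch; only the variable names and the `docker inspect` probe come from the log, while the retry interval and failure handling are assumptions (the trace only shows the happy path, where the first probe already returns `healthy`):

```shell
# Reconstruction of the polling helper seen in the xtrace above.
container_health() {
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [ "$(container_health "$name")" = healthy ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "$name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # interval is an assumption; the log does not show it
    done
    echo "$name is healthy"
}

# Stubbed probe so the sketch runs without a Docker daemon:
container_health() { echo healthy; }
wait_for_container_healthy 60 ceph-ansible   # prints "ceph-ansible is healthy"
```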
2026-03-28 00:22:45.383436 | orchestrator | 2026-03-28 00:22:45.383548 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-28 00:22:45.383564 | orchestrator | 2026-03-28 00:22:45.383577 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:22:45.383589 | orchestrator | Saturday 28 March 2026 00:22:35 +0000 (0:00:00.138) 0:00:00.138 ******** 2026-03-28 00:22:45.383601 | orchestrator | ok: [testbed-manager] 2026-03-28 00:22:45.383613 | orchestrator | 2026-03-28 00:22:45.383624 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-28 00:22:45.383636 | orchestrator | Saturday 28 March 2026 00:22:39 +0000 (0:00:03.743) 0:00:03.882 ******** 2026-03-28 00:22:45.383647 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:22:45.383673 | orchestrator | 2026-03-28 00:22:45.383685 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-28 00:22:45.383707 | orchestrator | Saturday 28 March 2026 00:22:39 +0000 (0:00:00.070) 0:00:03.953 ******** 2026-03-28 00:22:45.383718 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-28 00:22:45.383730 | orchestrator | 2026-03-28 00:22:45.383741 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-28 00:22:45.383752 | orchestrator | Saturday 28 March 2026 00:22:39 +0000 (0:00:00.076) 0:00:04.029 ******** 2026-03-28 00:22:45.383780 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-28 00:22:45.383792 | orchestrator | 2026-03-28 00:22:45.383803 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-28 00:22:45.383814 | orchestrator | Saturday 28 March 2026 00:22:39 +0000 (0:00:00.078) 0:00:04.108 ******** 2026-03-28 00:22:45.383824 | orchestrator | ok: [testbed-manager] 2026-03-28 00:22:45.383849 | orchestrator | 2026-03-28 00:22:45.383860 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-28 00:22:45.383882 | orchestrator | Saturday 28 March 2026 00:22:40 +0000 (0:00:01.153) 0:00:05.261 ******** 2026-03-28 00:22:45.383893 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:22:45.383904 | orchestrator | 2026-03-28 00:22:45.383915 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-28 00:22:45.383926 | orchestrator | Saturday 28 March 2026 00:22:40 +0000 (0:00:00.054) 0:00:05.315 ******** 2026-03-28 00:22:45.383961 | orchestrator | ok: [testbed-manager] 2026-03-28 00:22:45.383973 | orchestrator | 2026-03-28 00:22:45.383984 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-28 00:22:45.383995 | orchestrator | Saturday 28 March 2026 00:22:41 +0000 (0:00:00.513) 0:00:05.829 ******** 2026-03-28 00:22:45.384006 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:22:45.384017 | orchestrator | 2026-03-28 00:22:45.384028 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-28 00:22:45.384041 | orchestrator | Saturday 28 March 2026 00:22:41 +0000 (0:00:00.083) 0:00:05.912 ******** 2026-03-28 00:22:45.384052 | orchestrator | changed: [testbed-manager] 2026-03-28 00:22:45.384063 | orchestrator | 2026-03-28 00:22:45.384074 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-28 00:22:45.384084 | orchestrator | Saturday 28 March 2026 00:22:41 +0000 (0:00:00.564) 0:00:06.476 ******** 2026-03-28 00:22:45.384095 | orchestrator | changed: 
[testbed-manager] 2026-03-28 00:22:45.384106 | orchestrator | 2026-03-28 00:22:45.384117 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-28 00:22:45.384128 | orchestrator | Saturday 28 March 2026 00:22:42 +0000 (0:00:01.109) 0:00:07.586 ******** 2026-03-28 00:22:45.384140 | orchestrator | ok: [testbed-manager] 2026-03-28 00:22:45.384151 | orchestrator | 2026-03-28 00:22:45.384162 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-28 00:22:45.384173 | orchestrator | Saturday 28 March 2026 00:22:43 +0000 (0:00:00.989) 0:00:08.575 ******** 2026-03-28 00:22:45.384184 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-28 00:22:45.384195 | orchestrator | 2026-03-28 00:22:45.384206 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-28 00:22:45.384216 | orchestrator | Saturday 28 March 2026 00:22:43 +0000 (0:00:00.085) 0:00:08.661 ******** 2026-03-28 00:22:45.384227 | orchestrator | changed: [testbed-manager] 2026-03-28 00:22:45.384238 | orchestrator | 2026-03-28 00:22:45.384249 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:22:45.384261 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 00:22:45.384272 | orchestrator | 2026-03-28 00:22:45.384282 | orchestrator | 2026-03-28 00:22:45.384293 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:22:45.384304 | orchestrator | Saturday 28 March 2026 00:22:45 +0000 (0:00:01.155) 0:00:09.816 ******** 2026-03-28 00:22:45.384315 | orchestrator | =============================================================================== 2026-03-28 00:22:45.384326 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.74s 2026-03-28 00:22:45.384336 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.16s 2026-03-28 00:22:45.384347 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.15s 2026-03-28 00:22:45.384379 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.11s 2026-03-28 00:22:45.384391 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.99s 2026-03-28 00:22:45.384402 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.56s 2026-03-28 00:22:45.384428 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.51s 2026-03-28 00:22:45.384440 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2026-03-28 00:22:45.384451 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-03-28 00:22:45.384462 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-03-28 00:22:45.384473 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-03-28 00:22:45.384484 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-03-28 00:22:45.384503 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2026-03-28 00:22:45.691455 | orchestrator | + osism apply sshconfig 2026-03-28 00:22:57.804697 | orchestrator | 2026-03-28 00:22:57 | INFO  | Task b1045dbc-34e9-40e3-a7ae-15006df61a0a (sshconfig) was prepared for execution. 
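The `osism apply sshconfig` call queued above drives the osism.commons.sshconfig role, whose task names indicate per-host config files under `~/.ssh/config.d` plus an assemble step into a single ssh config. That fragment-and-assemble pattern can be sketched in plain shell as follows (the paths, user name, and config options here are illustrative assumptions; the real role does this via Ansible tasks):

```shell
# Illustrative sketch of the config.d fragment + assemble pattern
# suggested by the sshconfig role's task names.
ssh_dir="$(mktemp -d)"          # stand-in for ~/.ssh
mkdir -p "$ssh_dir/config.d"

for host in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 \
            testbed-node-3 testbed-node-4 testbed-node-5; do
    cat > "$ssh_dir/config.d/$host" <<EOF
Host $host
    User dragon
    StrictHostKeyChecking accept-new
EOF
done

# Concatenate the fragments into one config, like Ansible's assemble module:
cat "$ssh_dir"/config.d/* > "$ssh_dir/config"
```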
2026-03-28 00:22:57.804825 | orchestrator | 2026-03-28 00:22:57 | INFO  | It takes a moment until task b1045dbc-34e9-40e3-a7ae-15006df61a0a (sshconfig) has been started and output is visible here. 2026-03-28 00:23:09.628633 | orchestrator | 2026-03-28 00:23:09.628748 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-28 00:23:09.628766 | orchestrator | 2026-03-28 00:23:09.628797 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-28 00:23:09.628810 | orchestrator | Saturday 28 March 2026 00:23:02 +0000 (0:00:00.154) 0:00:00.154 ******** 2026-03-28 00:23:09.628821 | orchestrator | ok: [testbed-manager] 2026-03-28 00:23:09.628833 | orchestrator | 2026-03-28 00:23:09.628845 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-28 00:23:09.628856 | orchestrator | Saturday 28 March 2026 00:23:02 +0000 (0:00:00.549) 0:00:00.704 ******** 2026-03-28 00:23:09.628867 | orchestrator | changed: [testbed-manager] 2026-03-28 00:23:09.628879 | orchestrator | 2026-03-28 00:23:09.628890 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-28 00:23:09.628901 | orchestrator | Saturday 28 March 2026 00:23:03 +0000 (0:00:00.528) 0:00:01.232 ******** 2026-03-28 00:23:09.628911 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-03-28 00:23:09.628923 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-28 00:23:09.628934 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-28 00:23:09.628945 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-28 00:23:09.628956 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-28 00:23:09.628966 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-28 00:23:09.628977 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-03-28 00:23:09.628988 | orchestrator | 2026-03-28 00:23:09.628999 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-28 00:23:09.629010 | orchestrator | Saturday 28 March 2026 00:23:08 +0000 (0:00:05.667) 0:00:06.899 ******** 2026-03-28 00:23:09.629021 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:23:09.629032 | orchestrator | 2026-03-28 00:23:09.629043 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-28 00:23:09.629053 | orchestrator | Saturday 28 March 2026 00:23:08 +0000 (0:00:00.081) 0:00:06.981 ******** 2026-03-28 00:23:09.629064 | orchestrator | changed: [testbed-manager] 2026-03-28 00:23:09.629075 | orchestrator | 2026-03-28 00:23:09.629086 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:23:09.629098 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:23:09.629110 | orchestrator | 2026-03-28 00:23:09.629121 | orchestrator | 2026-03-28 00:23:09.629132 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:23:09.629143 | orchestrator | Saturday 28 March 2026 00:23:09 +0000 (0:00:00.561) 0:00:07.543 ******** 2026-03-28 00:23:09.629154 | orchestrator | =============================================================================== 2026-03-28 00:23:09.629164 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.67s 2026-03-28 00:23:09.629177 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.56s 2026-03-28 00:23:09.629190 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.55s 2026-03-28 00:23:09.629202 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.53s 2026-03-28 00:23:09.629215 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2026-03-28 00:23:09.907492 | orchestrator | + osism apply known-hosts 2026-03-28 00:23:22.066976 | orchestrator | 2026-03-28 00:23:22 | INFO  | Task 44951d14-47cd-4c75-a11b-0a350702fbbf (known-hosts) was prepared for execution. 2026-03-28 00:23:22.067104 | orchestrator | 2026-03-28 00:23:22 | INFO  | It takes a moment until task 44951d14-47cd-4c75-a11b-0a350702fbbf (known-hosts) has been started and output is visible here. 2026-03-28 00:23:38.722015 | orchestrator | 2026-03-28 00:23:38.722184 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-28 00:23:38.722201 | orchestrator | 2026-03-28 00:23:38.722213 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-28 00:23:38.722226 | orchestrator | Saturday 28 March 2026 00:23:26 +0000 (0:00:00.164) 0:00:00.164 ******** 2026-03-28 00:23:38.722237 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-28 00:23:38.722249 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-28 00:23:38.722260 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-28 00:23:38.722272 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-28 00:23:38.722283 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-28 00:23:38.722293 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-28 00:23:38.722304 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-28 00:23:38.722386 | orchestrator | 2026-03-28 00:23:38.722398 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-28 00:23:38.722411 | orchestrator | Saturday 28 March 2026 00:23:32 +0000 (0:00:05.977) 0:00:06.142 ******** 2026-03-28 
00:23:38.722423 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-28 00:23:38.722436 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-28 00:23:38.722448 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-28 00:23:38.722458 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-28 00:23:38.722469 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-28 00:23:38.722491 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-28 00:23:38.722503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-28 00:23:38.722514 | orchestrator | 2026-03-28 00:23:38.722525 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:23:38.722536 | orchestrator | Saturday 28 March 2026 00:23:32 +0000 (0:00:00.163) 0:00:06.305 ******** 2026-03-28 00:23:38.722559 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDCqQIdSVpl0xLSCq/6H44SeW2vzoC5el7uXp7S1qRL4fhpTbI104W4iCE7h81P6dmYK04kZ5e6jJzJzHHq8bcH0jXHYGY2LyACddfG3LpR6cNFroJQSFTnjgEf22LqS8hU64V9Mo+K3ggoN7uXDcAp8I/8TmPpepy4P5QpcHaa3A2MSLzMJl20Mge2lE6GfdSxEJMNenKvRR1586gIxiPYJt3FCi2TuJGhHO7wmThA9Wk0MfL9zYdIEiuNuhbX0p+ZV72XPT9RuQ3oAwzk68dfQ8PQPEXnjfUzvcZPaYyUbzdI1lZgdcxL7mrZOMTn5B0RWX2jT8gLBC4YpBrX5Xwe3RftB4DDAKEteOI9ftedMBR8HvYdHy/YHb7kxbsnV4XMeuZg2kvQFe49blAW/eQijnMX9z7bC8+1De/w+eOcPGFPZHLSX0ioKlr92ZUEkTYN89csTD5yVZ0TTAQ6zLsFFZqOIglP5Lu6IoQXrkQKRnsyLXqzKziEiUSApOZa0ks=) 2026-03-28 00:23:38.722602 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNfkUo3D5UPtW0FvfEsURWgxO/ob+iuEZivmBpvtz43Av1Xv31daH8FOzbetisYFM1jRgTB+6svb0kBuujMugxk=) 2026-03-28 00:23:38.722618 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEZumHZPl1Ssl6QGzuDLwqMViMEzVuk7L0KkxAvrf3V0) 2026-03-28 00:23:38.722632 | orchestrator | 2026-03-28 00:23:38.722645 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:23:38.722658 | orchestrator | Saturday 28 March 2026 00:23:33 +0000 (0:00:01.135) 0:00:07.440 ******** 2026-03-28 00:23:38.722670 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOj5Q3uqhCU4StOBrZxi9IvMZKXrOVWZco4G6iLHWicImbfmzrSSMR4an5Bk4QTYT6VfcwRXeeDEaaux5X5wfw0=) 2026-03-28 00:23:38.722712 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCj5K+U3RnQNZBv63Ygg+Uacc4n5rJu+2zAZ+pwqBeFbHDgovCpYvqCo93Pw3Ti8R8hVmFpx1vLRU3ZluDbRSJS/XzC+eM2ZAfA20COjm9rgzT9nbFqubDEjv02qZNY4Q33RILjIY85MJYESiGDQL+OOqLUU60fYTrL9xfriwaiUPsb8eMpcpYKSZahVDjEYy47uCstB4a7JX1O/7EsDng/6F6ni+zW0sJJLFcR+KlxOc3zF9m+zyIdqKXSbeMjnL7YPxqW/EpnaP8a4CvvEWe8HW3G8u8q3Kg25E6jsC1dcawhxiff+XwM7/ZGeiLJl+i94lQa3s43oS7uO7VvyAA64arMKYsZaYklC5PHuJbBwBiZLdetHhisHthCRHCWyI1btHKzS7AwtI7XeZmyt9NlOE8ZRcbFM2+Z9W3H3r6BotxjYLDxvDVifpV6wjDxvHQi9Ammvs8WUpY1qaR9ZOhtkmqw+58jCX6mcD75D5v/LzdlJcKMNkzf2Rgg9ZPM0U0=) 2026-03-28 00:23:38.722727 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOYRCVNyTcrodCuDk1ElvRzxm4CVpL6cvFaSx4bLAGD3) 2026-03-28 00:23:38.722739 | orchestrator | 2026-03-28 00:23:38.722751 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:23:38.722764 | orchestrator | Saturday 28 March 2026 00:23:34 +0000 (0:00:01.053) 0:00:08.494 ******** 2026-03-28 00:23:38.722777 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHwL8h4uSmPZe8eZ7eibIbxqMcMSbA1WsRttj6IZj7SmxEfSm2sA5XN3SuucptKSw1X/K2NzNJSkPdalVMsAwPA=) 2026-03-28 00:23:38.722790 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEFhuXr9sje84Yo6Sx/jgXJY8h0Wis9vL7XrSHhNfcNY) 2026-03-28 00:23:38.722805 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCXdrCHnFXi7QFo0cZjsTGGOrep6NdOgEM6lVZBGnWXBpqxq/WY8EQdqYuU/JAqU9fFHr3rxIFj/gSTzd00a0vaqogVpcGZeSYpOwxDquurYkxWhtFEpbTRG8TTOBJYcjzcctQLThdm7Y1HcPLhijZaxxHTYSPH38Ip+258qvB5IvSr1V5rBqt6mzHWC5DSxdqQK4WXlQ4DqwxLr7UrCZGx/HICpOJP3nGNbGK/C5CPbOMqQocRCKC059TUyeWL/FtPmWIUlhiV4ndf8wqP5J8vN7SCJSuC2cdWaBGyLe+HqsmoVUoXw0EcAziLRp8ef2dlnznNLGFyIg/jvCXTS1U8XVnfJZvF1NgCraCJDRq0/ZMvI1yTWzHIdop7un/O2fBrZVYVTU/rWG8xyn6CFQs0KIbGGTYNfFj+XhDUDrupsXSBh3rCGKwDDoTXOHQCulCwSdXYO+au0uP/dOoGgm94h1Bkr1GkXSGLLFXDUls8LndNyMXFA03TeM8pxut/q7s=) 2026-03-28 00:23:38.722818 | orchestrator | 2026-03-28 00:23:38.722831 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:23:38.722844 | orchestrator | Saturday 28 March 2026 00:23:35 +0000 (0:00:01.057) 0:00:09.551 ******** 2026-03-28 00:23:38.722857 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCRwPE+VPIRWEs6paYopdg4PKl7sCyAuuJlsXCF9wzabWvxPEEUtCioYUqRuFxUHeztJxyGupA7hLw4+/TKb2w7fMmQIntGY2v3FMc/M6VqLZawFAz4GVPPym0rPxNy9LJcy18YkhJPeCY79KWmNs+CTfttt0B7cOCVc/tWC6IOrtyEuz6gVtz3ncafGZPLUc2FkOg3WtGOOY/k8qtcOcOFxPGywt4WtIdhKk1nKuhKJEGHS9T/0yrNawCwsh9IO4h8EMdsLDxhGHUi/We4jH8iTXBuU+QUOof0GAWjN7U31rmUub3BtFfQ6I8a7Vsrq9XOZAyg0awllO0PRtM/TpSmxb6kFTz4HO+eE8zefhs3SLTqFmyoul0b7oyWqkX5dZdMXdyG6h2VRtaBfZMwzkKulxhmBdrsA/fuqdSxsLxP8WeaW5UTHHpsPni2WR1e9Q689x1M5GAHYIAbT5Z1gS1go0JXZEWn13oU5l1sN6kdPHIUvStwx1+rZNaw4X8pIR0=) 2026-03-28 00:23:38.722893 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNppI01PRYdi23j+uFrjxvvE8fdKbGYdjlyK6BfQXR+m6bjMCwGQqlleX7h8KOaBr1OLZkhjwIB19JopR2H++Ns=) 2026-03-28 00:23:38.722906 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFT2MXFc4ubBVvBHvRzrvor0GUG0uygZBoerZyCSCrPg) 2026-03-28 00:23:38.722919 | orchestrator | 2026-03-28 00:23:38.722930 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:23:38.722941 | orchestrator | Saturday 28 March 2026 00:23:36 +0000 (0:00:01.042) 0:00:10.594 ******** 2026-03-28 00:23:38.723032 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxbmXR0RKX4ZDW4dRKWaeshP8k2b9TX/e8374xcnOij6xKWJXAgsHNNKeKiS1mZ8nWUZqY6UTqZ4UGoqdDZdsn1g6AybMNgj9ZH/vELBOoySAL1to3Nh8cvGHJbSChHfSNQhyuCGvISQvgMLUWuJkAcWhSSSCkXrJ+mclXhIoOFHFkl8UKsX3HuT2lQP+kzrZ7wpbHS/crbhaU+2DrvqT66lQKbYIOxuJLd4zXVhvvTRNL0moHJRwEcgHTxG0JyPnoD7SmzQqT+D7Eq1UyES+e7WGOiyiy6aOHmeRsIr2uh/oqSPQqh9p+5aDEhKQOKCByHKLLj1k6TgTOhUAgZDlKSi386va+LUrv0yMTdTJECBYS3w1e+CflNy70YsSQbZHlfkP3EG0+2bgmsvGkdovpApLnO0hu3lIa/AIcYdsIm3fZgsd6968QumLZu7LjutihIPOx+QW9mKzt7kSc7eMF+OMuc5oLRtUqM1mRebVc90OsduZcds1zrhGxyN6Bq7E=) 2026-03-28 00:23:38.723044 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGlWiv0pi/5rxYXXQpBt3vYsOC4TvYws4GobYcF2119cTgFDWOCVoUdNxFbrAcHiTVpXGFVyfOJqf+QIUb6ovzY=) 2026-03-28 00:23:38.723055 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHY9mEF17RReh4z1k2ptIMNYrWMZJVnnq92FIIliuhFi) 2026-03-28 00:23:38.723067 | orchestrator | 2026-03-28 00:23:38.723078 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:23:38.723089 | orchestrator | Saturday 28 March 2026 00:23:37 +0000 (0:00:01.087) 0:00:11.682 ******** 2026-03-28 00:23:38.723109 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC+Rz3xpfxuci6v2KZRA/YPfQ++0sEYk6wfNdLzAL0100HX2Qc4Ln+Id7la8DLpncUeY1m0M/wI0ts9sDp73byPjguKOLCEY6+vmekymI75ZMke3HAaeQG0tW0ULXppcuDCKKxulR33xeNvpT9+NleFFfqVghJbbpJ8+9I+kBG3VuBBwllRsXdtlXNKMLCmUhIHuvAbWBWvlVZrwx+lGEg4fxkBfNUh9fE16q+UntEPoQAzt0Ih1ajV6AdaADwz0Tp1ESqRhKTaIQTAAHTuA21J1abovbZ1B3LeZ55gF0HDj0ybYWD7/ks0to0RGETT0wF44L8MEsrJrjec6iX/G69O8aoqZXw82eu5SApjPcrvpMxgQTSWMLp9QhCrQ8SV5ZLTg4pVA/Fccubpzb8XbD2FkrXeIN5+9N/yVTK7ceeLaJBH2QFfUZ9K0AVCiZxKgFXucB9jDJUj440LTboyTDVUDvKN6CPRPjCcu+oUFhl4CZJQ72Thf/dD+pa5zPodpRk=) 2026-03-28 00:23:49.478570 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIMEj3Rn7CCXncJMMeAgDQNF5rPmc33Z/NY0PjGyfL79yrY0K27M2f2yEXAJs8oZlYJRjeA6Qzw500C7couN+cM=) 2026-03-28 00:23:49.478708 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJewCAG2A1W2Yxnw3RriQgPaCoITUU4JXomVjKpGZKjt) 2026-03-28 00:23:49.478740 | orchestrator | 2026-03-28 00:23:49.478761 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:23:49.478781 | orchestrator | Saturday 28 March 2026 00:23:38 +0000 (0:00:01.067) 0:00:12.749 ******** 2026-03-28 00:23:49.478803 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDK/T0Ggcd0Km8AUgpnJ8+IOMxrYY73X0GnxHXrUI4wFuC8KfwN7FUEtv3Y9WV17AlIAwoBLklgYyF2NfcgBr/t2KjYzKpl77GtJsbHTdLrz4WYzfaSDEYLgjtd7Ws/UbqdZnMslHzptZVhRAslf7p4zb0PbjzJ1kxKabaYG0oiWxzVCduOOhB/rM9xiM3x+JfZdwBuezUULa0JEc4e4FlNvXyFM/m54L0z2xUgsODe/dvqF9sj0gyumNUQYfo2BoInpQngSNIKKkuLY36zp1JsHjYxLvbnXiijW2ZXDbERrRHPiKydYKL0jXX/zyP1PCgZexoLruK0vB9N29DpuFelK8XD+kyJobo6cmYpZOlkzZItELW9yAfWTg98hv4zl4OY2TMhuIVpps9FzNJKczqnkKXvWkD9uZKf20QkL4EqU8jh7ba0K08iY7d6Nayn9BvEeKD/YoPbHOBCIrhvzMjKh0oefXbrwTTt3PyLHnScEA3S8yvkavG8qB/oXF0ELtM=) 2026-03-28 00:23:49.478826 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIumM2K4hCDIhov0t0DGO+y84dYSE/WW8elNucL2I57K) 2026-03-28 00:23:49.478881 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMhWqaYggw7sqcetSmy/F4ElqN9q/TwBCoDBtRzwj2ZKtCQ/XmwNkalYyZfN1XKCB5wk8ncdRM978AX3vr/9r6E=) 2026-03-28 00:23:49.478902 | orchestrator | 2026-03-28 00:23:49.478913 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-28 00:23:49.478925 | orchestrator | Saturday 28 March 2026 00:23:39 +0000 (0:00:01.046) 0:00:13.795 ******** 2026-03-28 00:23:49.478936 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-28 00:23:49.478948 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-28 00:23:49.478958 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-28 00:23:49.478969 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-28 00:23:49.478980 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-28 00:23:49.478991 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-28 00:23:49.479001 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-28 00:23:49.479012 | orchestrator | 2026-03-28 00:23:49.479023 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-28 00:23:49.479038 | orchestrator | Saturday 28 March 2026 00:23:45 +0000 (0:00:05.259) 0:00:19.055 ******** 2026-03-28 00:23:49.479058 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-28 00:23:49.479080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for 
testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-28 00:23:49.479099 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-28 00:23:49.479117 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-28 00:23:49.479135 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-28 00:23:49.479153 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-28 00:23:49.479171 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-28 00:23:49.479190 | orchestrator | 2026-03-28 00:23:49.479208 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:23:49.479227 | orchestrator | Saturday 28 March 2026 00:23:45 +0000 (0:00:00.176) 0:00:19.231 ******** 2026-03-28 00:23:49.479290 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDCqQIdSVpl0xLSCq/6H44SeW2vzoC5el7uXp7S1qRL4fhpTbI104W4iCE7h81P6dmYK04kZ5e6jJzJzHHq8bcH0jXHYGY2LyACddfG3LpR6cNFroJQSFTnjgEf22LqS8hU64V9Mo+K3ggoN7uXDcAp8I/8TmPpepy4P5QpcHaa3A2MSLzMJl20Mge2lE6GfdSxEJMNenKvRR1586gIxiPYJt3FCi2TuJGhHO7wmThA9Wk0MfL9zYdIEiuNuhbX0p+ZV72XPT9RuQ3oAwzk68dfQ8PQPEXnjfUzvcZPaYyUbzdI1lZgdcxL7mrZOMTn5B0RWX2jT8gLBC4YpBrX5Xwe3RftB4DDAKEteOI9ftedMBR8HvYdHy/YHb7kxbsnV4XMeuZg2kvQFe49blAW/eQijnMX9z7bC8+1De/w+eOcPGFPZHLSX0ioKlr92ZUEkTYN89csTD5yVZ0TTAQ6zLsFFZqOIglP5Lu6IoQXrkQKRnsyLXqzKziEiUSApOZa0ks=) 2026-03-28 00:23:49.479364 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNfkUo3D5UPtW0FvfEsURWgxO/ob+iuEZivmBpvtz43Av1Xv31daH8FOzbetisYFM1jRgTB+6svb0kBuujMugxk=) 2026-03-28 00:23:49.479401 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEZumHZPl1Ssl6QGzuDLwqMViMEzVuk7L0KkxAvrf3V0) 2026-03-28 00:23:49.479421 | orchestrator | 2026-03-28 00:23:49.479436 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:23:49.479447 | orchestrator | Saturday 28 March 2026 00:23:46 +0000 (0:00:01.044) 0:00:20.276 ******** 2026-03-28 00:23:49.479458 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOYRCVNyTcrodCuDk1ElvRzxm4CVpL6cvFaSx4bLAGD3) 2026-03-28 00:23:49.479476 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCj5K+U3RnQNZBv63Ygg+Uacc4n5rJu+2zAZ+pwqBeFbHDgovCpYvqCo93Pw3Ti8R8hVmFpx1vLRU3ZluDbRSJS/XzC+eM2ZAfA20COjm9rgzT9nbFqubDEjv02qZNY4Q33RILjIY85MJYESiGDQL+OOqLUU60fYTrL9xfriwaiUPsb8eMpcpYKSZahVDjEYy47uCstB4a7JX1O/7EsDng/6F6ni+zW0sJJLFcR+KlxOc3zF9m+zyIdqKXSbeMjnL7YPxqW/EpnaP8a4CvvEWe8HW3G8u8q3Kg25E6jsC1dcawhxiff+XwM7/ZGeiLJl+i94lQa3s43oS7uO7VvyAA64arMKYsZaYklC5PHuJbBwBiZLdetHhisHthCRHCWyI1btHKzS7AwtI7XeZmyt9NlOE8ZRcbFM2+Z9W3H3r6BotxjYLDxvDVifpV6wjDxvHQi9Ammvs8WUpY1qaR9ZOhtkmqw+58jCX6mcD75D5v/LzdlJcKMNkzf2Rgg9ZPM0U0=) 2026-03-28 00:23:49.479487 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOj5Q3uqhCU4StOBrZxi9IvMZKXrOVWZco4G6iLHWicImbfmzrSSMR4an5Bk4QTYT6VfcwRXeeDEaaux5X5wfw0=) 2026-03-28 00:23:49.479498 | orchestrator | 2026-03-28 00:23:49.479509 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:23:49.479520 | orchestrator | Saturday 28 March 2026 00:23:47 +0000 (0:00:01.095) 0:00:21.371 ******** 2026-03-28 00:23:49.479531 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXdrCHnFXi7QFo0cZjsTGGOrep6NdOgEM6lVZBGnWXBpqxq/WY8EQdqYuU/JAqU9fFHr3rxIFj/gSTzd00a0vaqogVpcGZeSYpOwxDquurYkxWhtFEpbTRG8TTOBJYcjzcctQLThdm7Y1HcPLhijZaxxHTYSPH38Ip+258qvB5IvSr1V5rBqt6mzHWC5DSxdqQK4WXlQ4DqwxLr7UrCZGx/HICpOJP3nGNbGK/C5CPbOMqQocRCKC059TUyeWL/FtPmWIUlhiV4ndf8wqP5J8vN7SCJSuC2cdWaBGyLe+HqsmoVUoXw0EcAziLRp8ef2dlnznNLGFyIg/jvCXTS1U8XVnfJZvF1NgCraCJDRq0/ZMvI1yTWzHIdop7un/O2fBrZVYVTU/rWG8xyn6CFQs0KIbGGTYNfFj+XhDUDrupsXSBh3rCGKwDDoTXOHQCulCwSdXYO+au0uP/dOoGgm94h1Bkr1GkXSGLLFXDUls8LndNyMXFA03TeM8pxut/q7s=) 2026-03-28 00:23:49.479543 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHwL8h4uSmPZe8eZ7eibIbxqMcMSbA1WsRttj6IZj7SmxEfSm2sA5XN3SuucptKSw1X/K2NzNJSkPdalVMsAwPA=) 
2026-03-28 00:23:49.479554 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEFhuXr9sje84Yo6Sx/jgXJY8h0Wis9vL7XrSHhNfcNY) 2026-03-28 00:23:49.479564 | orchestrator | 2026-03-28 00:23:49.479575 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:23:49.479586 | orchestrator | Saturday 28 March 2026 00:23:48 +0000 (0:00:01.094) 0:00:22.466 ******** 2026-03-28 00:23:49.479596 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFT2MXFc4ubBVvBHvRzrvor0GUG0uygZBoerZyCSCrPg) 2026-03-28 00:23:49.479607 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCRwPE+VPIRWEs6paYopdg4PKl7sCyAuuJlsXCF9wzabWvxPEEUtCioYUqRuFxUHeztJxyGupA7hLw4+/TKb2w7fMmQIntGY2v3FMc/M6VqLZawFAz4GVPPym0rPxNy9LJcy18YkhJPeCY79KWmNs+CTfttt0B7cOCVc/tWC6IOrtyEuz6gVtz3ncafGZPLUc2FkOg3WtGOOY/k8qtcOcOFxPGywt4WtIdhKk1nKuhKJEGHS9T/0yrNawCwsh9IO4h8EMdsLDxhGHUi/We4jH8iTXBuU+QUOof0GAWjN7U31rmUub3BtFfQ6I8a7Vsrq9XOZAyg0awllO0PRtM/TpSmxb6kFTz4HO+eE8zefhs3SLTqFmyoul0b7oyWqkX5dZdMXdyG6h2VRtaBfZMwzkKulxhmBdrsA/fuqdSxsLxP8WeaW5UTHHpsPni2WR1e9Q689x1M5GAHYIAbT5Z1gS1go0JXZEWn13oU5l1sN6kdPHIUvStwx1+rZNaw4X8pIR0=) 2026-03-28 00:23:49.479632 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNppI01PRYdi23j+uFrjxvvE8fdKbGYdjlyK6BfQXR+m6bjMCwGQqlleX7h8KOaBr1OLZkhjwIB19JopR2H++Ns=) 2026-03-28 00:23:53.976760 | orchestrator | 2026-03-28 00:23:53.977636 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:23:53.977694 | orchestrator | Saturday 28 March 2026 00:23:49 +0000 (0:00:01.039) 0:00:23.505 ******** 2026-03-28 00:23:53.977718 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGlWiv0pi/5rxYXXQpBt3vYsOC4TvYws4GobYcF2119cTgFDWOCVoUdNxFbrAcHiTVpXGFVyfOJqf+QIUb6ovzY=) 2026-03-28 00:23:53.977744 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxbmXR0RKX4ZDW4dRKWaeshP8k2b9TX/e8374xcnOij6xKWJXAgsHNNKeKiS1mZ8nWUZqY6UTqZ4UGoqdDZdsn1g6AybMNgj9ZH/vELBOoySAL1to3Nh8cvGHJbSChHfSNQhyuCGvISQvgMLUWuJkAcWhSSSCkXrJ+mclXhIoOFHFkl8UKsX3HuT2lQP+kzrZ7wpbHS/crbhaU+2DrvqT66lQKbYIOxuJLd4zXVhvvTRNL0moHJRwEcgHTxG0JyPnoD7SmzQqT+D7Eq1UyES+e7WGOiyiy6aOHmeRsIr2uh/oqSPQqh9p+5aDEhKQOKCByHKLLj1k6TgTOhUAgZDlKSi386va+LUrv0yMTdTJECBYS3w1e+CflNy70YsSQbZHlfkP3EG0+2bgmsvGkdovpApLnO0hu3lIa/AIcYdsIm3fZgsd6968QumLZu7LjutihIPOx+QW9mKzt7kSc7eMF+OMuc5oLRtUqM1mRebVc90OsduZcds1zrhGxyN6Bq7E=) 2026-03-28 00:23:53.977768 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHY9mEF17RReh4z1k2ptIMNYrWMZJVnnq92FIIliuhFi) 2026-03-28 00:23:53.977786 | orchestrator | 2026-03-28 00:23:53.977805 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:23:53.977823 | orchestrator | Saturday 28 March 2026 00:23:50 +0000 (0:00:01.071) 0:00:24.577 ******** 2026-03-28 00:23:53.977842 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJewCAG2A1W2Yxnw3RriQgPaCoITUU4JXomVjKpGZKjt) 2026-03-28 00:23:53.977861 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC+Rz3xpfxuci6v2KZRA/YPfQ++0sEYk6wfNdLzAL0100HX2Qc4Ln+Id7la8DLpncUeY1m0M/wI0ts9sDp73byPjguKOLCEY6+vmekymI75ZMke3HAaeQG0tW0ULXppcuDCKKxulR33xeNvpT9+NleFFfqVghJbbpJ8+9I+kBG3VuBBwllRsXdtlXNKMLCmUhIHuvAbWBWvlVZrwx+lGEg4fxkBfNUh9fE16q+UntEPoQAzt0Ih1ajV6AdaADwz0Tp1ESqRhKTaIQTAAHTuA21J1abovbZ1B3LeZ55gF0HDj0ybYWD7/ks0to0RGETT0wF44L8MEsrJrjec6iX/G69O8aoqZXw82eu5SApjPcrvpMxgQTSWMLp9QhCrQ8SV5ZLTg4pVA/Fccubpzb8XbD2FkrXeIN5+9N/yVTK7ceeLaJBH2QFfUZ9K0AVCiZxKgFXucB9jDJUj440LTboyTDVUDvKN6CPRPjCcu+oUFhl4CZJQ72Thf/dD+pa5zPodpRk=) 2026-03-28 00:23:53.977881 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIMEj3Rn7CCXncJMMeAgDQNF5rPmc33Z/NY0PjGyfL79yrY0K27M2f2yEXAJs8oZlYJRjeA6Qzw500C7couN+cM=) 2026-03-28 00:23:53.977901 | orchestrator | 2026-03-28 00:23:53.977917 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:23:53.977933 | orchestrator | Saturday 28 March 2026 00:23:51 +0000 (0:00:01.086) 0:00:25.664 ******** 2026-03-28 00:23:53.977977 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDK/T0Ggcd0Km8AUgpnJ8+IOMxrYY73X0GnxHXrUI4wFuC8KfwN7FUEtv3Y9WV17AlIAwoBLklgYyF2NfcgBr/t2KjYzKpl77GtJsbHTdLrz4WYzfaSDEYLgjtd7Ws/UbqdZnMslHzptZVhRAslf7p4zb0PbjzJ1kxKabaYG0oiWxzVCduOOhB/rM9xiM3x+JfZdwBuezUULa0JEc4e4FlNvXyFM/m54L0z2xUgsODe/dvqF9sj0gyumNUQYfo2BoInpQngSNIKKkuLY36zp1JsHjYxLvbnXiijW2ZXDbERrRHPiKydYKL0jXX/zyP1PCgZexoLruK0vB9N29DpuFelK8XD+kyJobo6cmYpZOlkzZItELW9yAfWTg98hv4zl4OY2TMhuIVpps9FzNJKczqnkKXvWkD9uZKf20QkL4EqU8jh7ba0K08iY7d6Nayn9BvEeKD/YoPbHOBCIrhvzMjKh0oefXbrwTTt3PyLHnScEA3S8yvkavG8qB/oXF0ELtM=) 2026-03-28 00:23:53.977997 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMhWqaYggw7sqcetSmy/F4ElqN9q/TwBCoDBtRzwj2ZKtCQ/XmwNkalYyZfN1XKCB5wk8ncdRM978AX3vr/9r6E=) 
2026-03-28 00:23:53.978082 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIumM2K4hCDIhov0t0DGO+y84dYSE/WW8elNucL2I57K) 2026-03-28 00:23:53.978110 | orchestrator | 2026-03-28 00:23:53.978128 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-28 00:23:53.978177 | orchestrator | Saturday 28 March 2026 00:23:52 +0000 (0:00:01.079) 0:00:26.743 ******** 2026-03-28 00:23:53.978197 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-28 00:23:53.978217 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-28 00:23:53.978235 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-28 00:23:53.978252 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-28 00:23:53.978270 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-28 00:23:53.978287 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-28 00:23:53.978335 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-28 00:23:53.978355 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:23:53.978374 | orchestrator | 2026-03-28 00:23:53.978420 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-28 00:23:53.978439 | orchestrator | Saturday 28 March 2026 00:23:52 +0000 (0:00:00.188) 0:00:26.931 ******** 2026-03-28 00:23:53.978456 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:23:53.978476 | orchestrator | 2026-03-28 00:23:53.978494 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-28 00:23:53.978511 | orchestrator | Saturday 28 March 2026 00:23:52 +0000 (0:00:00.060) 0:00:26.992 ******** 2026-03-28 00:23:53.978529 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:23:53.978548 | orchestrator | 2026-03-28 
00:23:53.978565 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-28 00:23:53.978584 | orchestrator | Saturday 28 March 2026 00:23:53 +0000 (0:00:00.061) 0:00:27.053 ******** 2026-03-28 00:23:53.978601 | orchestrator | changed: [testbed-manager] 2026-03-28 00:23:53.978618 | orchestrator | 2026-03-28 00:23:53.978637 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:23:53.978655 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 00:23:53.978675 | orchestrator | 2026-03-28 00:23:53.978694 | orchestrator | 2026-03-28 00:23:53.978712 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:23:53.978731 | orchestrator | Saturday 28 March 2026 00:23:53 +0000 (0:00:00.730) 0:00:27.784 ******** 2026-03-28 00:23:53.978761 | orchestrator | =============================================================================== 2026-03-28 00:23:53.978779 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.98s 2026-03-28 00:23:53.978797 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.26s 2026-03-28 00:23:53.978814 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-03-28 00:23:53.978830 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-03-28 00:23:53.978846 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-03-28 00:23:53.978863 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-03-28 00:23:53.978973 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-03-28 00:23:53.978992 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-03-28 00:23:53.979010 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-03-28 00:23:53.979027 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-03-28 00:23:53.979045 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-28 00:23:53.979064 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-28 00:23:53.979083 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-28 00:23:53.979101 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-28 00:23:53.979193 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-28 00:23:53.979217 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-28 00:23:53.979237 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.73s 2026-03-28 00:23:53.979255 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.19s 2026-03-28 00:23:53.979352 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-03-28 00:23:53.979374 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-03-28 00:23:54.270361 | orchestrator | + osism apply squid 2026-03-28 00:24:06.265370 | orchestrator | 2026-03-28 00:24:06 | INFO  | Task cdd22f13-73b7-423c-abb9-127d35e96c01 (squid) was prepared for execution. 
2026-03-28 00:24:06.265463 | orchestrator | 2026-03-28 00:24:06 | INFO  | It takes a moment until task cdd22f13-73b7-423c-abb9-127d35e96c01 (squid) has been started and output is visible here. 2026-03-28 00:26:04.163096 | orchestrator | 2026-03-28 00:26:04.163243 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-28 00:26:04.163262 | orchestrator | 2026-03-28 00:26:04.163275 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-28 00:26:04.163286 | orchestrator | Saturday 28 March 2026 00:24:10 +0000 (0:00:00.164) 0:00:00.164 ******** 2026-03-28 00:26:04.163298 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-28 00:26:04.163311 | orchestrator | 2026-03-28 00:26:04.163322 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-28 00:26:04.163333 | orchestrator | Saturday 28 March 2026 00:24:10 +0000 (0:00:00.099) 0:00:00.263 ******** 2026-03-28 00:26:04.163345 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:04.163365 | orchestrator | 2026-03-28 00:26:04.163385 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-28 00:26:04.163403 | orchestrator | Saturday 28 March 2026 00:24:12 +0000 (0:00:01.502) 0:00:01.766 ******** 2026-03-28 00:26:04.163423 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-28 00:26:04.163440 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-28 00:26:04.163458 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-28 00:26:04.163476 | orchestrator | 2026-03-28 00:26:04.163493 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-28 00:26:04.163510 | orchestrator | Saturday 
28 March 2026 00:24:13 +0000 (0:00:01.151) 0:00:02.918 ******** 2026-03-28 00:26:04.163528 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-28 00:26:04.163545 | orchestrator | 2026-03-28 00:26:04.163565 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-28 00:26:04.163583 | orchestrator | Saturday 28 March 2026 00:24:14 +0000 (0:00:01.069) 0:00:03.987 ******** 2026-03-28 00:26:04.163601 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:04.163621 | orchestrator | 2026-03-28 00:26:04.163639 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-28 00:26:04.163658 | orchestrator | Saturday 28 March 2026 00:24:14 +0000 (0:00:00.374) 0:00:04.361 ******** 2026-03-28 00:26:04.163672 | orchestrator | changed: [testbed-manager] 2026-03-28 00:26:04.163683 | orchestrator | 2026-03-28 00:26:04.163694 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-28 00:26:04.163705 | orchestrator | Saturday 28 March 2026 00:24:15 +0000 (0:00:00.940) 0:00:05.302 ******** 2026-03-28 00:26:04.163716 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-28 00:26:04.163733 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:04.163744 | orchestrator | 2026-03-28 00:26:04.163755 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-28 00:26:04.163868 | orchestrator | Saturday 28 March 2026 00:24:50 +0000 (0:00:35.270) 0:00:40.572 ******** 2026-03-28 00:26:04.163883 | orchestrator | changed: [testbed-manager] 2026-03-28 00:26:04.163894 | orchestrator | 2026-03-28 00:26:04.163906 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-28 00:26:04.163916 | orchestrator | Saturday 28 March 2026 00:25:03 +0000 (0:00:12.199) 0:00:52.772 ******** 2026-03-28 00:26:04.163928 | orchestrator | Pausing for 60 seconds 2026-03-28 00:26:04.163939 | orchestrator | changed: [testbed-manager] 2026-03-28 00:26:04.163950 | orchestrator | 2026-03-28 00:26:04.163961 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-28 00:26:04.163972 | orchestrator | Saturday 28 March 2026 00:26:03 +0000 (0:01:00.094) 0:01:52.866 ******** 2026-03-28 00:26:04.163983 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:04.163994 | orchestrator | 2026-03-28 00:26:04.164005 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-28 00:26:04.164015 | orchestrator | Saturday 28 March 2026 00:26:03 +0000 (0:00:00.071) 0:01:52.938 ******** 2026-03-28 00:26:04.164026 | orchestrator | changed: [testbed-manager] 2026-03-28 00:26:04.164037 | orchestrator | 2026-03-28 00:26:04.164048 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:26:04.164059 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:26:04.164070 | orchestrator | 2026-03-28 00:26:04.164081 | orchestrator | 2026-03-28 00:26:04.164091 | orchestrator | 
TASKS RECAP ********************************************************************
2026-03-28 00:26:04.164102 | orchestrator | Saturday 28 March 2026 00:26:03 +0000 (0:00:00.669) 0:01:53.608 ********
2026-03-28 00:26:04.164113 | orchestrator | ===============================================================================
2026-03-28 00:26:04.164143 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s
2026-03-28 00:26:04.164154 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 35.27s
2026-03-28 00:26:04.164165 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.20s
2026-03-28 00:26:04.164176 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.50s
2026-03-28 00:26:04.164196 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.15s
2026-03-28 00:26:04.164242 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.07s
2026-03-28 00:26:04.164262 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.94s
2026-03-28 00:26:04.164281 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.67s
2026-03-28 00:26:04.164298 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s
2026-03-28 00:26:04.164315 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s
2026-03-28 00:26:04.164327 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2026-03-28 00:26:04.505085 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-03-28 00:26:04.505396 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-03-28 00:26:04.562127 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-28 00:26:04.562318 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-03-28 00:26:04.569848 | orchestrator | + set -e
2026-03-28 00:26:04.569916 | orchestrator | + NAMESPACE=kolla/release
2026-03-28 00:26:04.569930 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-03-28 00:26:04.577383 | orchestrator | ++ semver 9.5.0 9.0.0
2026-03-28 00:26:04.649727 | orchestrator | + [[ 1 -lt 0 ]]
2026-03-28 00:26:04.651252 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-03-28 00:26:16.747555 | orchestrator | 2026-03-28 00:26:16 | INFO  | Task 8e614ba1-d398-4278-8ad9-5400593e3b39 (operator) was prepared for execution.
2026-03-28 00:26:16.747671 | orchestrator | 2026-03-28 00:26:16 | INFO  | It takes a moment until task 8e614ba1-d398-4278-8ad9-5400593e3b39 (operator) has been started and output is visible here.
2026-03-28 00:26:32.886974 | orchestrator |
2026-03-28 00:26:32.887088 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-03-28 00:26:32.887109 | orchestrator |
2026-03-28 00:26:32.887123 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-28 00:26:32.887137 | orchestrator | Saturday 28 March 2026 00:26:20 +0000 (0:00:00.141) 0:00:00.141 ********
2026-03-28 00:26:32.887150 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:26:32.887164 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:26:32.887177 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:26:32.887249 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:26:32.887341 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:26:32.887356 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:26:32.887370 | orchestrator |
2026-03-28 00:26:32.887384 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-03-28 00:26:32.887394 | orchestrator | Saturday 28 March 2026 00:26:24 +0000 (0:00:03.314) 0:00:03.456 ********
2026-03-28 00:26:32.887408 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:26:32.887422 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:26:32.887436 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:26:32.887448 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:26:32.887461 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:26:32.887474 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:26:32.887489 | orchestrator |
2026-03-28 00:26:32.887505 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-03-28 00:26:32.887520 | orchestrator |
2026-03-28 00:26:32.887534 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-03-28 00:26:32.887548 | orchestrator | Saturday 28 March 2026 00:26:24 +0000 (0:00:00.756) 0:00:04.213 ********
2026-03-28 00:26:32.887561 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:26:32.887573 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:26:32.887586 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:26:32.887598 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:26:32.887610 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:26:32.887622 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:26:32.887634 | orchestrator |
2026-03-28 00:26:32.887647 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-03-28 00:26:32.887668 | orchestrator | Saturday 28 March 2026 00:26:25 +0000 (0:00:00.165) 0:00:04.378 ********
2026-03-28 00:26:32.887682 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:26:32.887694 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:26:32.887711 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:26:32.887728 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:26:32.887740 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:26:32.887751 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:26:32.887764 | orchestrator |
2026-03-28 00:26:32.887794 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-03-28 00:26:32.887807 | orchestrator | Saturday 28 March 2026 00:26:25 +0000 (0:00:00.166) 0:00:04.544 ********
2026-03-28 00:26:32.887823 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:26:32.887836 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:26:32.887850 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:26:32.887863 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:26:32.887876 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:26:32.887889 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:26:32.887901 | orchestrator |
2026-03-28 00:26:32.887914 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-03-28 00:26:32.887928 | orchestrator | Saturday 28 March 2026 00:26:25 +0000 (0:00:00.630) 0:00:05.174 ********
2026-03-28 00:26:32.887939 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:26:32.887951 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:26:32.887964 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:26:32.887975 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:26:32.887987 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:26:32.888000 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:26:32.888034 | orchestrator |
2026-03-28 00:26:32.888045 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-03-28 00:26:32.888057 | orchestrator | Saturday 28 March 2026 00:26:26 +0000 (0:00:00.798) 0:00:05.973 ********
2026-03-28 00:26:32.888069 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-03-28 00:26:32.888081 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-03-28 00:26:32.888091 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-03-28 00:26:32.888102 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-03-28 00:26:32.888113 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-03-28 00:26:32.888123 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-03-28 00:26:32.888133 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-03-28 00:26:32.888145 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-03-28 00:26:32.888157 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-03-28 00:26:32.888164 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-03-28 00:26:32.888171 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-03-28 00:26:32.888177 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-03-28 00:26:32.888225 | orchestrator |
2026-03-28 00:26:32.888237 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-03-28 00:26:32.888249 | orchestrator | Saturday 28 March 2026 00:26:27 +0000 (0:00:01.195) 0:00:07.168 ********
2026-03-28 00:26:32.888261 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:26:32.888273 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:26:32.888284 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:26:32.888296 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:26:32.888307 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:26:32.888314 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:26:32.888323 | orchestrator |
2026-03-28 00:26:32.888333 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-03-28 00:26:32.888347 | orchestrator | Saturday 28 March 2026 00:26:29 +0000 (0:00:01.211) 0:00:08.380 ********
2026-03-28 00:26:32.888357 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-03-28 00:26:32.888368 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-03-28 00:26:32.888380 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-03-28 00:26:32.888393 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-03-28 00:26:32.888427 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-03-28 00:26:32.888441 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-03-28 00:26:32.888456 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-03-28 00:26:32.888468 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-03-28 00:26:32.888479 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-03-28 00:26:32.888490 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-03-28 00:26:32.888501 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-03-28 00:26:32.888512 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-03-28 00:26:32.888523 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-03-28 00:26:32.888535 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-03-28 00:26:32.888542 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-03-28 00:26:32.888552 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-03-28 00:26:32.888562 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-03-28 00:26:32.888574 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-03-28 00:26:32.888586 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-03-28 00:26:32.888598 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-03-28 00:26:32.888621 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-03-28 00:26:32.888628 | orchestrator |
2026-03-28 00:26:32.888635 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-28 00:26:32.888643 | orchestrator | Saturday 28 March 2026 00:26:30 +0000 (0:00:01.212) 0:00:09.593 ********
2026-03-28 00:26:32.888649 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:26:32.888656 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:26:32.888663 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:26:32.888669 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:26:32.888677 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:26:32.888689 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:26:32.888700 | orchestrator |
2026-03-28 00:26:32.888781 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-28 00:26:32.888815 | orchestrator | Saturday 28 March 2026 00:26:30 +0000 (0:00:00.169) 0:00:09.762 ********
2026-03-28 00:26:32.888828 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:26:32.888840 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:26:32.888877 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:26:32.888892 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:26:32.888903 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:26:32.888916 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:26:32.888927 | orchestrator |
2026-03-28 00:26:32.888974 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-28 00:26:32.888987 | orchestrator | Saturday 28 March 2026 00:26:30 +0000 (0:00:00.235) 0:00:09.998 ********
2026-03-28 00:26:32.888999 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:26:32.889010 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:26:32.889267 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:26:32.889288 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:26:32.889299 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:26:32.889310 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:26:32.889322 | orchestrator |
2026-03-28 00:26:32.889333 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-28 00:26:32.889345 | orchestrator | Saturday 28 March 2026 00:26:31 +0000 (0:00:00.889) 0:00:10.888 ********
2026-03-28 00:26:32.889358 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:26:32.889367 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:26:32.889380 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:26:32.889399 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:26:32.889422 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:26:32.889434 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:26:32.889450 | orchestrator |
2026-03-28 00:26:32.889498 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-28 00:26:32.889511 | orchestrator | Saturday 28 March 2026 00:26:31 +0000 (0:00:00.199) 0:00:11.087 ********
2026-03-28 00:26:32.889557 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-28 00:26:32.889569 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:26:32.889581 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-28 00:26:32.889592 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-28 00:26:32.889602 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-28 00:26:32.889615 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:26:32.889627 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:26:32.889638 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:26:32.889647 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-28 00:26:32.889657 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:26:32.889668 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-28 00:26:32.889677 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:26:32.889688 | orchestrator |
2026-03-28 00:26:32.889698 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-28 00:26:32.889705 | orchestrator | Saturday 28 March 2026 00:26:32 +0000 (0:00:00.711) 0:00:11.799 ********
2026-03-28 00:26:32.889722 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:26:32.889733 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:26:32.889743 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:26:32.889752 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:26:32.889759 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:26:32.889765 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:26:32.889774 | orchestrator |
2026-03-28 00:26:32.889784 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-28 00:26:32.889795 | orchestrator | Saturday 28 March 2026 00:26:32 +0000 (0:00:00.170) 0:00:11.969 ********
2026-03-28 00:26:32.889805 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:26:32.889815 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:26:32.889826 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:26:32.889836 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:26:32.889860 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:26:34.392341 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:26:34.392447 | orchestrator |
2026-03-28 00:26:34.392464 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-28 00:26:34.392478 | orchestrator | Saturday 28 March 2026 00:26:32 +0000 (0:00:00.169) 0:00:12.138 ********
2026-03-28 00:26:34.392489 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:26:34.392500 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:26:34.392511 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:26:34.392522 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:26:34.392533 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:26:34.392543 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:26:34.392554 | orchestrator |
2026-03-28 00:26:34.392565 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-28 00:26:34.392576 | orchestrator | Saturday 28 March 2026 00:26:33 +0000 (0:00:00.159) 0:00:12.297 ********
2026-03-28 00:26:34.392587 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:26:34.392598 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:26:34.392609 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:26:34.392620 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:26:34.392630 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:26:34.392641 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:26:34.392652 | orchestrator |
2026-03-28 00:26:34.392662 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-28 00:26:34.392673 | orchestrator | Saturday 28 March 2026 00:26:33 +0000 (0:00:00.806) 0:00:13.104 ********
2026-03-28 00:26:34.392684 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:26:34.392695 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:26:34.392706 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:26:34.392717 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:26:34.392728 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:26:34.392739 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:26:34.392749 | orchestrator |
2026-03-28 00:26:34.392760 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:26:34.392792 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-28 00:26:34.392807 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-28 00:26:34.392820 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-28 00:26:34.392833 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-28 00:26:34.392846 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-28 00:26:34.392881 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-28 00:26:34.392894 | orchestrator |
2026-03-28 00:26:34.392906 | orchestrator |
2026-03-28 00:26:34.392919 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:26:34.392932 | orchestrator | Saturday 28 March 2026 00:26:34 +0000 (0:00:00.239) 0:00:13.343 ********
2026-03-28 00:26:34.392944 | orchestrator | ===============================================================================
2026-03-28 00:26:34.392956 | orchestrator | Gathering Facts --------------------------------------------------------- 3.31s
2026-03-28 00:26:34.392969 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.21s
2026-03-28 00:26:34.392982 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.21s
2026-03-28 00:26:34.392993 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.20s
2026-03-28 00:26:34.393003 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.89s
2026-03-28 00:26:34.393014 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.81s
2026-03-28 00:26:34.393024 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.80s
2026-03-28 00:26:34.393035 | orchestrator | Do not require tty for all users ---------------------------------------- 0.76s
2026-03-28 00:26:34.393046 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s
2026-03-28 00:26:34.393071 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.63s
2026-03-28 00:26:34.393082 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s
2026-03-28 00:26:34.393093 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.24s
2026-03-28 00:26:34.393104 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s
2026-03-28 00:26:34.393114 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s
2026-03-28 00:26:34.393125 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s
2026-03-28 00:26:34.393136 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s
2026-03-28 00:26:34.393147 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s
2026-03-28 00:26:34.393157 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s
2026-03-28 00:26:34.393168 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2026-03-28 00:26:34.727559 | orchestrator | + osism apply --environment custom facts
2026-03-28 00:26:36.701727 | orchestrator | 2026-03-28 00:26:36 | INFO  | Trying to run play facts in environment custom
2026-03-28 00:26:46.896698 | orchestrator | 2026-03-28 00:26:46 | INFO  | Task 72008ed2-6530-4dd9-868f-495908da5849 (facts) was prepared for execution.
2026-03-28 00:26:46.896811 | orchestrator | 2026-03-28 00:26:46 | INFO  | It takes a moment until task 72008ed2-6530-4dd9-868f-495908da5849 (facts) has been started and output is visible here.
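The bash trace above (`semver 9.5.0 10.0.0-0`, the `[[ -1 -ge 0 ]]` test, and the `sed` rewrite of `docker_namespace` in `kolla.yml`) shows the kolla image namespace being chosen from the deployed version. One plausible reading of that logic can be sketched as follows; the job's real `semver` helper is not shown, so `cmp_semver` here is a hypothetical stand-in built on `sort -V` (which, unlike a full semver comparator, does not handle pre-release suffixes such as `10.0.0-0`), and the `kolla` default is an assumption:

```shell
# Sketch only: reconstructs the namespace-selection branch seen in the trace.
# cmp_semver is a hypothetical replacement for the testbed's semver helper.
cmp_semver() {
    # Print -1, 0, or 1 depending on whether $1 <, =, > $2 (no pre-release support).
    if [ "$1" = "$2" ]; then printf '0\n'; return; fi
    first=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
    if [ "$first" = "$1" ]; then printf -- '-1\n'; else printf '1\n'; fi
}

VERSION=9.5.0
NAMESPACE=kolla   # assumed default; the job's actual default is not visible in the log
# Releases below 10.0.0 pull from the kolla/release namespace, as in the trace.
if [ "$VERSION" != "latest" ] && [ "$(cmp_semver "$VERSION" 10.0.0)" -lt 0 ]; then
    NAMESPACE=kolla/release
fi
printf '%s\n' "$NAMESPACE"
```

With `VERSION=9.5.0` this prints `kolla/release`, matching the `sed -i 's#docker_namespace: .*#...#'` substitution recorded in the trace.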
2026-03-28 00:27:33.037396 | orchestrator |
2026-03-28 00:27:33.037511 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-03-28 00:27:33.037527 | orchestrator |
2026-03-28 00:27:33.037539 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-28 00:27:33.037551 | orchestrator | Saturday 28 March 2026 00:26:51 +0000 (0:00:00.086) 0:00:00.086 ********
2026-03-28 00:27:33.037562 | orchestrator | ok: [testbed-manager]
2026-03-28 00:27:33.037574 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:27:33.037586 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:27:33.037597 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:27:33.037609 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:27:33.037620 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:27:33.037656 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:27:33.037669 | orchestrator |
2026-03-28 00:27:33.037680 | orchestrator | TASK [Copy fact file] **********************************************************
2026-03-28 00:27:33.037691 | orchestrator | Saturday 28 March 2026 00:26:52 +0000 (0:00:01.403) 0:00:01.489 ********
2026-03-28 00:27:33.037702 | orchestrator | ok: [testbed-manager]
2026-03-28 00:27:33.037713 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:27:33.037724 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:27:33.037735 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:27:33.037746 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:27:33.037756 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:27:33.037767 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:27:33.037778 | orchestrator |
2026-03-28 00:27:33.037790 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-03-28 00:27:33.037801 | orchestrator |
2026-03-28 00:27:33.037812 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-28 00:27:33.037823 | orchestrator | Saturday 28 March 2026 00:26:53 +0000 (0:00:01.265) 0:00:02.755 ********
2026-03-28 00:27:33.037834 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:27:33.037845 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:27:33.037856 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:27:33.037866 | orchestrator |
2026-03-28 00:27:33.037877 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-28 00:27:33.037889 | orchestrator | Saturday 28 March 2026 00:26:53 +0000 (0:00:00.104) 0:00:02.860 ********
2026-03-28 00:27:33.037900 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:27:33.037913 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:27:33.037926 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:27:33.037939 | orchestrator |
2026-03-28 00:27:33.037951 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-28 00:27:33.037963 | orchestrator | Saturday 28 March 2026 00:26:54 +0000 (0:00:00.223) 0:00:03.084 ********
2026-03-28 00:27:33.037975 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:27:33.037988 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:27:33.038001 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:27:33.038012 | orchestrator |
2026-03-28 00:27:33.038093 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-28 00:27:33.038108 | orchestrator | Saturday 28 March 2026 00:26:54 +0000 (0:00:00.229) 0:00:03.313 ********
2026-03-28 00:27:33.038122 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:27:33.038136 | orchestrator |
2026-03-28 00:27:33.038149 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-28 00:27:33.038200 | orchestrator | Saturday 28 March 2026 00:26:54 +0000 (0:00:00.152) 0:00:03.466 ********
2026-03-28 00:27:33.038211 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:27:33.038222 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:27:33.038233 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:27:33.038244 | orchestrator |
2026-03-28 00:27:33.038255 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-28 00:27:33.038265 | orchestrator | Saturday 28 March 2026 00:26:54 +0000 (0:00:00.474) 0:00:03.941 ********
2026-03-28 00:27:33.038276 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:27:33.038287 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:27:33.038298 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:27:33.038308 | orchestrator |
2026-03-28 00:27:33.038319 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-28 00:27:33.038330 | orchestrator | Saturday 28 March 2026 00:26:55 +0000 (0:00:00.173) 0:00:04.115 ********
2026-03-28 00:27:33.038341 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:27:33.038352 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:27:33.038362 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:27:33.038373 | orchestrator |
2026-03-28 00:27:33.038384 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-28 00:27:33.038404 | orchestrator | Saturday 28 March 2026 00:26:56 +0000 (0:00:01.087) 0:00:05.202 ********
2026-03-28 00:27:33.038415 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:27:33.038426 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:27:33.038437 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:27:33.038447 | orchestrator |
2026-03-28 00:27:33.038458 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-28 00:27:33.038515 | orchestrator | Saturday 28 March 2026 00:26:56 +0000 (0:00:00.439) 0:00:05.641 ********
2026-03-28 00:27:33.038528 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:27:33.038539 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:27:33.038550 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:27:33.038561 | orchestrator |
2026-03-28 00:27:33.038572 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-28 00:27:33.038583 | orchestrator | Saturday 28 March 2026 00:26:57 +0000 (0:00:01.085) 0:00:06.727 ********
2026-03-28 00:27:33.038594 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:27:33.038604 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:27:33.038615 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:27:33.038626 | orchestrator |
2026-03-28 00:27:33.038637 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-03-28 00:27:33.038648 | orchestrator | Saturday 28 March 2026 00:27:14 +0000 (0:00:16.859) 0:00:23.586 ********
2026-03-28 00:27:33.038659 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:27:33.038670 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:27:33.038681 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:27:33.038692 | orchestrator |
2026-03-28 00:27:33.038703 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-03-28 00:27:33.038732 | orchestrator | Saturday 28 March 2026 00:27:14 +0000 (0:00:00.096) 0:00:23.683 ********
2026-03-28 00:27:33.038744 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:27:33.038755 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:27:33.038765 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:27:33.038776 | orchestrator |
2026-03-28 00:27:33.038788 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-28 00:27:33.038807 | orchestrator | Saturday 28 March 2026 00:27:23 +0000 (0:00:08.810) 0:00:32.494 ********
2026-03-28 00:27:33.038824 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:27:33.038842 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:27:33.038859 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:27:33.038876 | orchestrator |
2026-03-28 00:27:33.038893 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-28 00:27:33.038911 | orchestrator | Saturday 28 March 2026 00:27:23 +0000 (0:00:00.471) 0:00:32.966 ********
2026-03-28 00:27:33.038927 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-28 00:27:33.038944 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-28 00:27:33.038962 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-28 00:27:33.038982 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-28 00:27:33.039009 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-28 00:27:33.039028 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-28 00:27:33.039047 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-28 00:27:33.039067 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-28 00:27:33.039084 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-28 00:27:33.039102 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-28 00:27:33.039113 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-28 00:27:33.039124 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-28 00:27:33.039135 | orchestrator |
2026-03-28 00:27:33.039145 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-28 00:27:33.039195 | orchestrator | Saturday 28 March 2026 00:27:27 +0000 (0:00:03.885) 0:00:36.851 ********
2026-03-28 00:27:33.039207 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:27:33.039218 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:27:33.039229 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:27:33.039240 | orchestrator |
2026-03-28 00:27:33.039251 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-28 00:27:33.039261 | orchestrator |
2026-03-28 00:27:33.039272 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-28 00:27:33.039283 | orchestrator | Saturday 28 March 2026 00:27:29 +0000 (0:00:01.698) 0:00:38.549 ********
2026-03-28 00:27:33.039294 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:27:33.039305 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:27:33.039316 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:27:33.039327 | orchestrator | ok: [testbed-manager]
2026-03-28 00:27:33.039338 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:27:33.039348 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:27:33.039359 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:27:33.039370 | orchestrator |
2026-03-28 00:27:33.039381 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:27:33.039393 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:27:33.039404 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:27:33.039417 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:27:33.039428 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:27:33.039439 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:27:33.039451 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:27:33.039461 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:27:33.039472 | orchestrator |
2026-03-28 00:27:33.039483 | orchestrator |
2026-03-28 00:27:33.039494 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:27:33.039505 | orchestrator | Saturday 28 March 2026 00:27:33 +0000 (0:00:03.512) 0:00:42.062 ********
2026-03-28 00:27:33.039516 | orchestrator | ===============================================================================
2026-03-28 00:27:33.039527 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.86s
2026-03-28 00:27:33.039537 | orchestrator | Install required packages (Debian) -------------------------------------- 8.81s
2026-03-28 00:27:33.039548 | orchestrator | Copy fact files --------------------------------------------------------- 3.89s
2026-03-28 00:27:33.039559 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.51s
2026-03-28 00:27:33.039570 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.70s
2026-03-28 00:27:33.039581 | orchestrator | Create custom facts directory ------------------------------------------- 1.40s
2026-03-28 00:27:33.039601 | orchestrator | Copy fact file ---------------------------------------------------------- 1.27s
2026-03-28 00:27:33.305758 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.09s
2026-03-28 00:27:33.305854 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.09s
2026-03-28 00:27:33.305869 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.47s
2026-03-28 00:27:33.305880 | orchestrator | Create custom facts directory ------------------------------------------- 0.47s
2026-03-28 00:27:33.305917 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.44s
2026-03-28 00:27:33.305928 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.23s
2026-03-28 00:27:33.305939 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.22s
2026-03-28 00:27:33.305951 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.17s
2026-03-28 00:27:33.305961 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2026-03-28 00:27:33.305973 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2026-03-28 00:27:33.305998 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-03-28 00:27:33.635458 | orchestrator | + osism apply bootstrap
2026-03-28 00:27:45.764758 | orchestrator | 2026-03-28 00:27:45 | INFO  | Task 946e13d3-3b37-4600-bc0b-b3e2a9821495 (bootstrap) was prepared for execution.
2026-03-28 00:27:45.764900 | orchestrator | 2026-03-28 00:27:45 | INFO  | It takes a moment until task 946e13d3-3b37-4600-bc0b-b3e2a9821495 (bootstrap) has been started and output is visible here.
2026-03-28 00:28:02.508429 | orchestrator |
2026-03-28 00:28:02.508551 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-28 00:28:02.508578 | orchestrator |
2026-03-28 00:28:02.508599 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-28 00:28:02.508618 | orchestrator | Saturday 28 March 2026 00:27:50 +0000 (0:00:00.153) 0:00:00.153 ********
2026-03-28 00:28:02.508638 | orchestrator | ok: [testbed-manager]
2026-03-28 00:28:02.508658 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:28:02.508677 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:28:02.508697 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:28:02.508709 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:28:02.508719 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:28:02.508731 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:28:02.508742 | orchestrator |
2026-03-28 00:28:02.508754 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-28 00:28:02.508765 | orchestrator |
2026-03-28 00:28:02.508776 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-28 00:28:02.508787 | orchestrator | Saturday 28 March 2026 00:27:50 +0000 (0:00:00.239) 0:00:00.392 ********
2026-03-28 00:28:02.508798 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:28:02.508809 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:28:02.508820 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:28:02.508830 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:28:02.508841 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:28:02.508852 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:28:02.508862 | orchestrator | ok: [testbed-manager]
2026-03-28 00:28:02.508873 | orchestrator |
2026-03-28 00:28:02.508884 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-28 00:28:02.508895 | orchestrator |
2026-03-28 00:28:02.508906 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-28 00:28:02.508916 | orchestrator | Saturday 28 March 2026 00:27:54 +0000 (0:00:03.831) 0:00:04.224 ********
2026-03-28 00:28:02.508928 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-28 00:28:02.508939 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-28 00:28:02.508950 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-28 00:28:02.508963 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-28 00:28:02.508976 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 00:28:02.508988 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-28 00:28:02.509001 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-28 00:28:02.509013 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 00:28:02.509026 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-28 00:28:02.509063 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 00:28:02.509076 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-28 00:28:02.509089 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 00:28:02.509102 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-28 00:28:02.509114 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-28 00:28:02.509127 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 00:28:02.509171 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-28 00:28:02.509186 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:28:02.509199 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-28 00:28:02.509211 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:28:02.509224 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-28 00:28:02.509238 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-28 00:28:02.509250 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-28 00:28:02.509263 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-28 00:28:02.509276 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-28 00:28:02.509289 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-28 00:28:02.509302 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-28 00:28:02.509315 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-28 00:28:02.509328 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-28 00:28:02.509340 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-28 00:28:02.509351 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-28 00:28:02.509362 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-28 00:28:02.509372 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-28 00:28:02.509383 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-28 00:28:02.509394 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-28 00:28:02.509404 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-28 00:28:02.509415 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-28 00:28:02.509426 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-28 00:28:02.509436 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 00:28:02.509447 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-28 00:28:02.509458 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:28:02.509469 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-28 00:28:02.509480 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-28 00:28:02.509490 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-28 00:28:02.509501 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-28 00:28:02.509512 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:28:02.509523 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-28 00:28:02.509551 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-28 00:28:02.509571 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-28 00:28:02.509589 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-28 00:28:02.509607 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:28:02.509646 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-28 00:28:02.509666 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-28 00:28:02.509685 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-28 00:28:02.509703 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:28:02.509728 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-28 00:28:02.509739 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:28:02.509749 | orchestrator |
2026-03-28 00:28:02.509760 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-03-28 00:28:02.509771 | orchestrator |
2026-03-28 00:28:02.509782 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-03-28 00:28:02.509793 | orchestrator | Saturday 28 March 2026 00:27:54 +0000 (0:00:00.470) 0:00:04.695 ********
2026-03-28 00:28:02.509803 | orchestrator | ok: [testbed-manager]
2026-03-28 00:28:02.509814 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:28:02.509824 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:28:02.509835 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:28:02.509845 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:28:02.509856 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:28:02.509866 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:28:02.509877 | orchestrator |
2026-03-28 00:28:02.509888 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-03-28 00:28:02.509899 | orchestrator | Saturday 28 March 2026 00:27:55 +0000 (0:00:01.234) 0:00:05.930 ********
2026-03-28 00:28:02.509909 | orchestrator | ok: [testbed-manager]
2026-03-28 00:28:02.509920 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:28:02.509930 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:28:02.509940 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:28:02.509951 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:28:02.509962 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:28:02.509972 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:28:02.509983 | orchestrator |
2026-03-28 00:28:02.509994 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-03-28 00:28:02.510004 | orchestrator | Saturday 28 March 2026 00:27:57 +0000 (0:00:01.373) 0:00:07.303 ********
2026-03-28 00:28:02.510080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:28:02.510099 | orchestrator |
2026-03-28 00:28:02.510110 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-03-28 00:28:02.510121 | orchestrator | Saturday 28 March 2026 00:27:57 +0000 (0:00:00.313) 0:00:07.616 ********
2026-03-28 00:28:02.510132 | orchestrator | changed: [testbed-manager]
2026-03-28 00:28:02.510198 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:28:02.510209 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:28:02.510220 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:28:02.510231 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:28:02.510242 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:28:02.510252 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:28:02.510263 | orchestrator |
2026-03-28 00:28:02.510274 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-03-28 00:28:02.510284 | orchestrator | Saturday 28 March 2026 00:27:59 +0000 (0:00:02.148) 0:00:09.765 ********
2026-03-28 00:28:02.510295 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:28:02.510308 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:28:02.510321 | orchestrator |
2026-03-28 00:28:02.510332 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-03-28 00:28:02.510343 | orchestrator | Saturday 28 March 2026 00:27:59 +0000 (0:00:00.273) 0:00:10.038 ********
2026-03-28 00:28:02.510354 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:28:02.510364 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:28:02.510375 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:28:02.510386 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:28:02.510396 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:28:02.510407 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:28:02.510425 | orchestrator |
2026-03-28 00:28:02.510436 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-03-28 00:28:02.510447 | orchestrator | Saturday 28 March 2026 00:28:01 +0000 (0:00:01.270) 0:00:11.309 ********
2026-03-28 00:28:02.510458 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:28:02.510468 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:28:02.510479 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:28:02.510489 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:28:02.510500 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:28:02.510510 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:28:02.510521 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:28:02.510532 | orchestrator |
2026-03-28 00:28:02.510542 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-03-28 00:28:02.510553 | orchestrator | Saturday 28 March 2026 00:28:01 +0000 (0:00:00.687) 0:00:11.996 ********
2026-03-28 00:28:02.510564 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:28:02.510574 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:28:02.510586 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:28:02.510615 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:28:02.510634 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:28:02.510653 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:28:02.510674 | orchestrator | ok: [testbed-manager]
2026-03-28 00:28:02.510693 | orchestrator |
2026-03-28 00:28:02.510712 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-28 00:28:02.510727 | orchestrator | Saturday 28 March 2026 00:28:02 +0000 (0:00:00.447) 0:00:12.444 ********
2026-03-28 00:28:02.510738 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:28:02.510749 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:28:02.510771 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:28:14.842207 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:28:14.842311 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:28:14.842325 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:28:14.842335 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:28:14.842344 | orchestrator |
2026-03-28 00:28:14.842354 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-28 00:28:14.842364 | orchestrator | Saturday 28 March 2026 00:28:02 +0000 (0:00:00.258) 0:00:12.702 ********
2026-03-28 00:28:14.842375 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:28:14.842400 | orchestrator |
2026-03-28 00:28:14.842409 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-28 00:28:14.842419 | orchestrator | Saturday 28 March 2026 00:28:02 +0000 (0:00:00.276) 0:00:12.979 ********
2026-03-28 00:28:14.842428 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:28:14.842437 | orchestrator |
2026-03-28 00:28:14.842446 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-28 00:28:14.842454 | orchestrator | Saturday 28 March 2026 00:28:03 +0000 (0:00:00.261) 0:00:13.241 ********
2026-03-28 00:28:14.842463 | orchestrator | ok: [testbed-manager]
2026-03-28 00:28:14.842473 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:28:14.842481 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:28:14.842490 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:28:14.842499 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:28:14.842507 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:28:14.842516 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:28:14.842525 | orchestrator |
2026-03-28 00:28:14.842533 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-28 00:28:14.842542 | orchestrator | Saturday 28 March 2026 00:28:04 +0000 (0:00:01.544) 0:00:14.786 ********
2026-03-28 00:28:14.842572 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:28:14.842581 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:28:14.842590 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:28:14.842599 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:28:14.842607 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:28:14.842616 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:28:14.842624 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:28:14.842633 | orchestrator |
2026-03-28 00:28:14.842643 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-28 00:28:14.842653 | orchestrator | Saturday 28 March 2026 00:28:04 +0000 (0:00:00.291) 0:00:15.077 ********
2026-03-28 00:28:14.842663 | orchestrator | ok: [testbed-manager]
2026-03-28 00:28:14.842674 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:28:14.842684 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:28:14.842694 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:28:14.842703 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:28:14.842711 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:28:14.842719 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:28:14.842728 | orchestrator |
2026-03-28 00:28:14.842737 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-28 00:28:14.842745 | orchestrator | Saturday 28 March 2026 00:28:05 +0000 (0:00:00.537) 0:00:15.615 ********
2026-03-28 00:28:14.842754 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:28:14.842762 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:28:14.842771 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:28:14.842780 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:28:14.842788 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:28:14.842797 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:28:14.842806 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:28:14.842814 | orchestrator |
2026-03-28 00:28:14.842823 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-28 00:28:14.842833 | orchestrator | Saturday 28 March 2026 00:28:05 +0000 (0:00:00.193) 0:00:15.808 ********
2026-03-28 00:28:14.842842 | orchestrator | ok: [testbed-manager]
2026-03-28 00:28:14.842850 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:28:14.842859 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:28:14.842868 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:28:14.842876 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:28:14.842884 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:28:14.842893 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:28:14.842901 | orchestrator |
2026-03-28 00:28:14.842910 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-28 00:28:14.842919 | orchestrator | Saturday 28 March 2026 00:28:06 +0000 (0:00:00.529) 0:00:16.337 ********
2026-03-28 00:28:14.842927 | orchestrator | ok: [testbed-manager]
2026-03-28 00:28:14.842936 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:28:14.842944 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:28:14.842953 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:28:14.842961 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:28:14.842970 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:28:14.842978 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:28:14.842987 | orchestrator |
2026-03-28 00:28:14.842995 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-28 00:28:14.843062 | orchestrator | Saturday 28 March 2026 00:28:07 +0000 (0:00:01.113) 0:00:17.451 ********
2026-03-28 00:28:14.843074 | orchestrator | ok: [testbed-manager]
2026-03-28 00:28:14.843091 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:28:14.843100 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:28:14.843108 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:28:14.843117 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:28:14.843125 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:28:14.843159 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:28:14.843168 | orchestrator |
2026-03-28 00:28:14.843177 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-28 00:28:14.843195 | orchestrator | Saturday 28 March 2026 00:28:08 +0000 (0:00:01.277) 0:00:18.728 ********
2026-03-28 00:28:14.843220 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:28:14.843230 | orchestrator |
2026-03-28 00:28:14.843239 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-28 00:28:14.843248 | orchestrator | Saturday 28 March 2026 00:28:08 +0000 (0:00:00.318) 0:00:19.047 ********
2026-03-28 00:28:14.843256 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:28:14.843265 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:28:14.843274 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:28:14.843282 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:28:14.843291 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:28:14.843300 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:28:14.843308 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:28:14.843317 | orchestrator |
2026-03-28 00:28:14.843325 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-28 00:28:14.843334 | orchestrator | Saturday 28 March 2026 00:28:10 +0000 (0:00:01.285) 0:00:20.332 ********
2026-03-28 00:28:14.843343 | orchestrator | ok: [testbed-manager]
2026-03-28 00:28:14.843352 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:28:14.843360 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:28:14.843369 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:28:14.843377 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:28:14.843386 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:28:14.843394 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:28:14.843403 | orchestrator |
2026-03-28 00:28:14.843412 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-28 00:28:14.843421 | orchestrator | Saturday 28 March 2026 00:28:10 +0000 (0:00:00.220) 0:00:20.552 ********
2026-03-28 00:28:14.843429 | orchestrator | ok: [testbed-manager]
2026-03-28 00:28:14.843438 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:28:14.843446 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:28:14.843454 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:28:14.843463 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:28:14.843471 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:28:14.843480 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:28:14.843489 | orchestrator |
2026-03-28 00:28:14.843497 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-28 00:28:14.843506 | orchestrator | Saturday 28 March 2026 00:28:10 +0000 (0:00:00.220) 0:00:20.773 ********
2026-03-28 00:28:14.843515 | orchestrator | ok: [testbed-manager]
2026-03-28 00:28:14.843523 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:28:14.843532 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:28:14.843540 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:28:14.843554 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:28:14.843568 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:28:14.843583 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:28:14.843597 | orchestrator |
2026-03-28 00:28:14.843611 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-28 00:28:14.843625 | orchestrator | Saturday 28 March 2026 00:28:10 +0000 (0:00:00.252) 0:00:21.025 ********
2026-03-28 00:28:14.843639 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:28:14.843652 | orchestrator |
2026-03-28 00:28:14.843665 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-28 00:28:14.843678 | orchestrator | Saturday 28 March 2026 00:28:11 +0000 (0:00:00.524) 0:00:21.306 ********
2026-03-28 00:28:14.843693 | orchestrator | ok: [testbed-manager]
2026-03-28 00:28:14.843706 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:28:14.843730 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:28:14.843746 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:28:14.843761 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:28:14.843774 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:28:14.843787 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:28:14.843796 | orchestrator |
2026-03-28 00:28:14.843805 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-28 00:28:14.843814 | orchestrator | Saturday 28 March 2026 00:28:11 +0000 (0:00:00.228) 0:00:21.831 ********
2026-03-28 00:28:14.843822 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:28:14.843831 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:28:14.843839 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:28:14.843848 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:28:14.843857 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:28:14.843865 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:28:14.843874 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:28:14.843882 | orchestrator |
2026-03-28 00:28:14.843891 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-28 00:28:14.843900 | orchestrator | Saturday 28 March 2026 00:28:11 +0000 (0:00:00.228) 0:00:22.059 ********
2026-03-28 00:28:14.843908 | orchestrator | ok: [testbed-manager]
2026-03-28 00:28:14.843917 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:28:14.843925 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:28:14.843934 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:28:14.843943 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:28:14.843951 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:28:14.843960 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:28:14.843968 | orchestrator |
2026-03-28 00:28:14.843977 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-28 00:28:14.843986 | orchestrator | Saturday 28 March 2026 00:28:13 +0000 (0:00:01.138) 0:00:23.197 ********
2026-03-28 00:28:14.843994 | orchestrator | ok: [testbed-manager]
2026-03-28 00:28:14.844003 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:28:14.844011 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:28:14.844020 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:28:14.844029 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:28:14.844037 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:28:14.844054 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:28:14.844063 | orchestrator |
2026-03-28 00:28:14.844072 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-28 00:28:14.844081 | orchestrator | Saturday 28 March 2026 00:28:13 +0000 (0:00:00.560) 0:00:23.758 ********
2026-03-28 00:28:14.844090 | orchestrator | ok: [testbed-manager]
2026-03-28 00:28:14.844098 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:28:14.844107 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:28:14.844115 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:28:14.844179 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:28:56.484717 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:28:56.484822 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:28:56.484837 | orchestrator |
2026-03-28 00:28:56.484850 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-28 00:28:56.484863 | orchestrator | Saturday 28 March 2026 00:28:14 +0000 (0:00:01.157) 0:00:24.915 ********
2026-03-28 00:28:56.484874 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:28:56.484886 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:28:56.484897 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:28:56.484909 | orchestrator | changed: [testbed-manager]
2026-03-28 00:28:56.484920 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:28:56.484931 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:28:56.484942 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:28:56.484954 | orchestrator |
2026-03-28 00:28:56.484965 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-03-28 00:28:56.484976 | orchestrator | Saturday 28 March 2026 00:28:31 +0000 (0:00:16.361) 0:00:41.277 ********
2026-03-28 00:28:56.484988 | orchestrator | ok: [testbed-manager]
2026-03-28 00:28:56.485025 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:28:56.485037 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:28:56.485047 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:28:56.485058 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:28:56.485069 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:28:56.485079 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:28:56.485090 | orchestrator |
2026-03-28 00:28:56.485101 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-03-28 00:28:56.485146 | orchestrator | Saturday 28 March 2026 00:28:31 +0000 (0:00:00.346) 0:00:41.624 ********
2026-03-28 00:28:56.485158 | orchestrator | ok: [testbed-manager]
2026-03-28 00:28:56.485169 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:28:56.485180 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:28:56.485191 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:28:56.485202 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:28:56.485213 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:28:56.485224 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:28:56.485234 | orchestrator |
2026-03-28 00:28:56.485247 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-03-28 00:28:56.485260 | orchestrator | Saturday 28 March 2026 00:28:31 +0000 (0:00:00.250) 0:00:41.875 ********
2026-03-28 00:28:56.485272 | orchestrator | ok: [testbed-manager]
2026-03-28 00:28:56.485284 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:28:56.485298 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:28:56.485310 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:28:56.485322 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:28:56.485335 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:28:56.485348 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:28:56.485360 | orchestrator |
2026-03-28 00:28:56.485373 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-28 00:28:56.485386 | orchestrator | Saturday 28 March 2026 00:28:32 +0000 (0:00:00.247) 0:00:42.122 ********
2026-03-28 00:28:56.485401 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:28:56.485416 | orchestrator |
2026-03-28 00:28:56.485429 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-03-28 00:28:56.485442 | orchestrator | Saturday 28 March 2026 00:28:32 +0000 (0:00:00.321) 0:00:42.444 ********
2026-03-28 00:28:56.485453 | orchestrator | ok: [testbed-manager]
2026-03-28 00:28:56.485464 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:28:56.485474 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:28:56.485485 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:28:56.485496 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:28:56.485506 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:28:56.485517 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:28:56.485528 | orchestrator |
2026-03-28 00:28:56.485538 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-03-28 00:28:56.485549 | orchestrator | Saturday 28 March 2026 00:28:34 +0000 (0:00:01.684) 0:00:44.128 ********
2026-03-28 00:28:56.485560 | orchestrator | changed: [testbed-manager]
2026-03-28 00:28:56.485571 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:28:56.485582 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:28:56.485593 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:28:56.485603 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:28:56.485614 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:28:56.485625 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:28:56.485636 | orchestrator |
2026-03-28 00:28:56.485647 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-03-28 00:28:56.485657 | orchestrator | Saturday 28 March 2026 00:28:35 +0000 (0:00:01.054) 0:00:45.183 ********
2026-03-28 00:28:56.485668 | orchestrator | ok: [testbed-manager]
2026-03-28 00:28:56.485679 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:28:56.485690 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:28:56.485708 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:28:56.485719 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:28:56.485730 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:28:56.485741 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:28:56.485752 | orchestrator |
2026-03-28 00:28:56.485763 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-03-28 00:28:56.485774 | orchestrator | Saturday 28 March 2026 00:28:35 +0000 (0:00:00.775) 0:00:45.959 ********
2026-03-28 00:28:56.485785 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:28:56.485798 | orchestrator |
2026-03-28 00:28:56.485824 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-03-28 00:28:56.485837 | orchestrator | Saturday 28 March 2026 00:28:36 +0000 (0:00:00.322) 0:00:46.281 ********
2026-03-28 00:28:56.485847 | orchestrator | changed: [testbed-manager]
2026-03-28 00:28:56.485858 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:28:56.485869 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:28:56.485880 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:28:56.485891 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:28:56.485902 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:28:56.485913 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:28:56.485923 | orchestrator |
2026-03-28 00:28:56.485951 | orchestrator | TASK [osism.services.rsyslog :
Include additional log server tasks] ************ 2026-03-28 00:28:56.485963 | orchestrator | Saturday 28 March 2026 00:28:37 +0000 (0:00:01.049) 0:00:47.331 ******** 2026-03-28 00:28:56.485975 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:28:56.485986 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:28:56.485997 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:28:56.486007 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:28:56.486084 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:28:56.486097 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:28:56.486133 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:28:56.486148 | orchestrator | 2026-03-28 00:28:56.486158 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-03-28 00:28:56.486169 | orchestrator | Saturday 28 March 2026 00:28:37 +0000 (0:00:00.263) 0:00:47.594 ******** 2026-03-28 00:28:56.486180 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:28:56.486192 | orchestrator | 2026-03-28 00:28:56.486202 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-03-28 00:28:56.486213 | orchestrator | Saturday 28 March 2026 00:28:37 +0000 (0:00:00.331) 0:00:47.925 ******** 2026-03-28 00:28:56.486224 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:28:56.486234 | orchestrator | ok: [testbed-manager] 2026-03-28 00:28:56.486245 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:28:56.486256 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:28:56.486266 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:28:56.486277 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:28:56.486288 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:28:56.486298 | 
orchestrator | 2026-03-28 00:28:56.486309 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-03-28 00:28:56.486320 | orchestrator | Saturday 28 March 2026 00:28:39 +0000 (0:00:01.810) 0:00:49.736 ******** 2026-03-28 00:28:56.486331 | orchestrator | changed: [testbed-manager] 2026-03-28 00:28:56.486342 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:28:56.486352 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:28:56.486363 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:28:56.486374 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:28:56.486385 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:28:56.486395 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:28:56.486415 | orchestrator | 2026-03-28 00:28:56.486426 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-03-28 00:28:56.486437 | orchestrator | Saturday 28 March 2026 00:28:40 +0000 (0:00:01.135) 0:00:50.871 ******** 2026-03-28 00:28:56.486453 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:28:56.486472 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:28:56.486489 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:28:56.486508 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:28:56.486526 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:28:56.486546 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:28:56.486564 | orchestrator | changed: [testbed-manager] 2026-03-28 00:28:56.486582 | orchestrator | 2026-03-28 00:28:56.486601 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-03-28 00:28:56.486618 | orchestrator | Saturday 28 March 2026 00:28:53 +0000 (0:00:13.184) 0:01:04.055 ******** 2026-03-28 00:28:56.486635 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:28:56.486653 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:28:56.486670 | orchestrator | ok: 
[testbed-node-2] 2026-03-28 00:28:56.486687 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:28:56.486707 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:28:56.486725 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:28:56.486744 | orchestrator | ok: [testbed-manager] 2026-03-28 00:28:56.486761 | orchestrator | 2026-03-28 00:28:56.486781 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-03-28 00:28:56.486800 | orchestrator | Saturday 28 March 2026 00:28:54 +0000 (0:00:00.696) 0:01:04.752 ******** 2026-03-28 00:28:56.486818 | orchestrator | ok: [testbed-manager] 2026-03-28 00:28:56.486836 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:28:56.486854 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:28:56.486869 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:28:56.486880 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:28:56.486890 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:28:56.486901 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:28:56.486912 | orchestrator | 2026-03-28 00:28:56.486923 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-03-28 00:28:56.486934 | orchestrator | Saturday 28 March 2026 00:28:55 +0000 (0:00:00.935) 0:01:05.687 ******** 2026-03-28 00:28:56.486945 | orchestrator | ok: [testbed-manager] 2026-03-28 00:28:56.486956 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:28:56.486966 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:28:56.486977 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:28:56.486988 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:28:56.486999 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:28:56.487009 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:28:56.487020 | orchestrator | 2026-03-28 00:28:56.487031 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-03-28 00:28:56.487042 | orchestrator | Saturday 
28 March 2026 00:28:55 +0000 (0:00:00.272) 0:01:05.959 ******** 2026-03-28 00:28:56.487053 | orchestrator | ok: [testbed-manager] 2026-03-28 00:28:56.487064 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:28:56.487076 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:28:56.487094 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:28:56.487138 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:28:56.487156 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:28:56.487174 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:28:56.487194 | orchestrator | 2026-03-28 00:28:56.487223 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-03-28 00:28:56.487243 | orchestrator | Saturday 28 March 2026 00:28:56 +0000 (0:00:00.259) 0:01:06.218 ******** 2026-03-28 00:28:56.487255 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:28:56.487267 | orchestrator | 2026-03-28 00:28:56.487290 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-03-28 00:31:17.354650 | orchestrator | Saturday 28 March 2026 00:28:56 +0000 (0:00:00.343) 0:01:06.562 ******** 2026-03-28 00:31:17.354762 | orchestrator | ok: [testbed-manager] 2026-03-28 00:31:17.354778 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:31:17.354791 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:31:17.354807 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:31:17.354827 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:31:17.354849 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:31:17.354865 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:31:17.354881 | orchestrator | 2026-03-28 00:31:17.354898 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 
2026-03-28 00:31:17.354914 | orchestrator | Saturday 28 March 2026 00:28:58 +0000 (0:00:01.925) 0:01:08.487 ******** 2026-03-28 00:31:17.354930 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:31:17.354948 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:31:17.354964 | orchestrator | changed: [testbed-manager] 2026-03-28 00:31:17.354981 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:31:17.354997 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:31:17.355014 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:31:17.355031 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:31:17.355169 | orchestrator | 2026-03-28 00:31:17.355192 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-28 00:31:17.355211 | orchestrator | Saturday 28 March 2026 00:28:58 +0000 (0:00:00.590) 0:01:09.077 ******** 2026-03-28 00:31:17.355228 | orchestrator | ok: [testbed-manager] 2026-03-28 00:31:17.355246 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:31:17.355263 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:31:17.355281 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:31:17.355297 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:31:17.355314 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:31:17.355331 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:31:17.355347 | orchestrator | 2026-03-28 00:31:17.355366 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-28 00:31:17.355383 | orchestrator | Saturday 28 March 2026 00:28:59 +0000 (0:00:00.258) 0:01:09.336 ******** 2026-03-28 00:31:17.355400 | orchestrator | ok: [testbed-manager] 2026-03-28 00:31:17.355416 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:31:17.355433 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:31:17.355450 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:31:17.355466 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:31:17.355484 | 
orchestrator | ok: [testbed-node-1] 2026-03-28 00:31:17.355501 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:31:17.355517 | orchestrator | 2026-03-28 00:31:17.355533 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-28 00:31:17.355551 | orchestrator | Saturday 28 March 2026 00:29:00 +0000 (0:00:01.219) 0:01:10.555 ******** 2026-03-28 00:31:17.355567 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:31:17.355584 | orchestrator | changed: [testbed-manager] 2026-03-28 00:31:17.355601 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:31:17.355618 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:31:17.355635 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:31:17.355651 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:31:17.355668 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:31:17.355685 | orchestrator | 2026-03-28 00:31:17.355708 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-28 00:31:17.355726 | orchestrator | Saturday 28 March 2026 00:29:02 +0000 (0:00:02.068) 0:01:12.624 ******** 2026-03-28 00:31:17.355743 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:31:17.355760 | orchestrator | ok: [testbed-manager] 2026-03-28 00:31:17.355776 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:31:17.355793 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:31:17.355810 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:31:17.355827 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:31:17.355844 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:31:17.355856 | orchestrator | 2026-03-28 00:31:17.355866 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-28 00:31:17.355901 | orchestrator | Saturday 28 March 2026 00:29:05 +0000 (0:00:02.544) 0:01:15.168 ******** 2026-03-28 00:31:17.355911 | orchestrator | ok: [testbed-manager] 2026-03-28 00:31:17.355921 
| orchestrator | ok: [testbed-node-0] 2026-03-28 00:31:17.355931 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:31:17.355940 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:31:17.355950 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:31:17.355960 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:31:17.355969 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:31:17.355978 | orchestrator | 2026-03-28 00:31:17.355988 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-28 00:31:17.355998 | orchestrator | Saturday 28 March 2026 00:29:41 +0000 (0:00:36.296) 0:01:51.465 ******** 2026-03-28 00:31:17.356008 | orchestrator | changed: [testbed-manager] 2026-03-28 00:31:17.356017 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:31:17.356027 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:31:17.356037 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:31:17.356083 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:31:17.356099 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:31:17.356113 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:31:17.356129 | orchestrator | 2026-03-28 00:31:17.356144 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-28 00:31:17.356160 | orchestrator | Saturday 28 March 2026 00:31:00 +0000 (0:01:19.095) 0:03:10.560 ******** 2026-03-28 00:31:17.356176 | orchestrator | ok: [testbed-manager] 2026-03-28 00:31:17.356192 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:31:17.356209 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:31:17.356224 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:31:17.356240 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:31:17.356256 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:31:17.356273 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:31:17.356291 | orchestrator | 2026-03-28 00:31:17.356307 | orchestrator | TASK [osism.commons.packages 
: Remove dependencies that are no longer required] *** 2026-03-28 00:31:17.356325 | orchestrator | Saturday 28 March 2026 00:31:02 +0000 (0:00:01.735) 0:03:12.296 ******** 2026-03-28 00:31:17.356335 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:31:17.356347 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:31:17.356363 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:31:17.356388 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:31:17.356405 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:31:17.356420 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:31:17.356437 | orchestrator | changed: [testbed-manager] 2026-03-28 00:31:17.356452 | orchestrator | 2026-03-28 00:31:17.356469 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-28 00:31:17.356485 | orchestrator | Saturday 28 March 2026 00:31:15 +0000 (0:00:12.872) 0:03:25.168 ******** 2026-03-28 00:31:17.356542 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-28 00:31:17.356589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 
'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-28 00:31:17.356626 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-28 00:31:17.356644 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-28 00:31:17.356660 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-28 00:31:17.356677 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-03-28 00:31:17.356694 | orchestrator | 2026-03-28 00:31:17.356711 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-28 00:31:17.356728 | orchestrator | Saturday 28 March 2026 00:31:15 +0000 (0:00:00.430) 0:03:25.598 ******** 2026-03-28 00:31:17.356745 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-28 00:31:17.356761 | orchestrator | 
skipping: [testbed-manager] 2026-03-28 00:31:17.356779 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-28 00:31:17.356789 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-28 00:31:17.356800 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:31:17.356817 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-28 00:31:17.356834 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:31:17.356849 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:31:17.356865 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 00:31:17.356880 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 00:31:17.356896 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 00:31:17.356912 | orchestrator | 2026-03-28 00:31:17.356927 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-28 00:31:17.356943 | orchestrator | Saturday 28 March 2026 00:31:17 +0000 (0:00:01.740) 0:03:27.339 ******** 2026-03-28 00:31:17.356959 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-28 00:31:17.356987 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-28 00:31:17.357004 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-28 00:31:17.357020 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-28 00:31:17.357037 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-28 00:31:17.357096 | 
orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-28 00:31:24.241272 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-28 00:31:24.241390 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-28 00:31:24.241425 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-28 00:31:24.241435 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-28 00:31:24.241442 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-28 00:31:24.241449 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-28 00:31:24.241455 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-28 00:31:24.241462 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-28 00:31:24.241469 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-28 00:31:24.241476 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-28 00:31:24.241483 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:31:24.241491 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-28 00:31:24.241498 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-28 00:31:24.241505 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-28 00:31:24.241511 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-28 00:31:24.241518 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:31:24.241525 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-28 00:31:24.241532 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-28 00:31:24.241538 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-28 00:31:24.241545 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-28 00:31:24.241552 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-28 00:31:24.241559 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-28 00:31:24.241565 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-28 00:31:24.241572 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-28 00:31:24.241579 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-28 00:31:24.241585 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-28 00:31:24.241592 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-28 00:31:24.241599 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-28 00:31:24.241606 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-28 00:31:24.241612 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-28 00:31:24.241619 
| orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-28 00:31:24.241625 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-28 00:31:24.241632 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-28 00:31:24.241639 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-28 00:31:24.241645 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-28 00:31:24.241658 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-28 00:31:24.241665 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:31:24.241672 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:31:24.241689 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-28 00:31:24.241696 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-28 00:31:24.241703 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-28 00:31:24.241710 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-28 00:31:24.241717 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-28 00:31:24.241737 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-28 00:31:24.241745 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-28 00:31:24.241751 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-28 00:31:24.241758 
| orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-28 00:31:24.241765 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-28 00:31:24.241771 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-28 00:31:24.241778 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-28 00:31:24.241785 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-28 00:31:24.241791 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-28 00:31:24.241798 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-28 00:31:24.241805 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-28 00:31:24.241813 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-28 00:31:24.241820 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-28 00:31:24.241828 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-28 00:31:24.241836 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-28 00:31:24.241843 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-28 00:31:24.241851 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-28 00:31:24.241859 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-28 00:31:24.241867 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-28 00:31:24.241874 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-28 00:31:24.241882 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-28 00:31:24.241889 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-28 00:31:24.241897 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-28 00:31:24.241905 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-28 00:31:24.241913 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-28 00:31:24.241925 | orchestrator | 2026-03-28 00:31:24.241933 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-03-28 00:31:24.241940 | orchestrator | Saturday 28 March 2026 00:31:22 +0000 (0:00:04.862) 0:03:32.201 ******** 2026-03-28 00:31:24.241948 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-28 00:31:24.241955 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-28 00:31:24.241962 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-28 00:31:24.241970 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-28 00:31:24.241977 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-28 00:31:24.241985 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-28 00:31:24.241992 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-28 00:31:24.242002 | orchestrator | 2026-03-28 
00:31:24.242014 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-03-28 00:31:24.242102 | orchestrator | Saturday 28 March 2026 00:31:23 +0000 (0:00:01.534) 0:03:33.736 ******** 2026-03-28 00:31:24.242114 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-28 00:31:24.242127 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:31:24.242138 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-28 00:31:24.242149 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:31:24.242166 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-28 00:31:24.242177 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:31:24.242188 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-28 00:31:24.242199 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:31:24.242210 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-28 00:31:24.242221 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-28 00:31:24.242242 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-28 00:31:38.945895 | orchestrator | 2026-03-28 00:31:38.945999 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-03-28 00:31:38.946014 | orchestrator | Saturday 28 March 2026 00:31:24 +0000 (0:00:00.580) 0:03:34.317 ******** 2026-03-28 00:31:38.946131 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-28 00:31:38.946143 | orchestrator | skipping: 
[testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-28 00:31:38.946153 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:31:38.946165 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-28 00:31:38.946175 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:31:38.946185 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-28 00:31:38.946194 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:31:38.946204 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:31:38.946214 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-28 00:31:38.946223 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-28 00:31:38.946233 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-28 00:31:38.946243 | orchestrator | 2026-03-28 00:31:38.946253 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-03-28 00:31:38.946288 | orchestrator | Saturday 28 March 2026 00:31:25 +0000 (0:00:01.705) 0:03:36.022 ******** 2026-03-28 00:31:38.946299 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-28 00:31:38.946308 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:31:38.946318 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-28 00:31:38.946328 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-28 00:31:38.946338 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:31:38.946347 | orchestrator | skipping: 
[testbed-node-1] 2026-03-28 00:31:38.946357 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-28 00:31:38.946366 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:31:38.946384 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-28 00:31:38.946401 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-28 00:31:38.946418 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-28 00:31:38.946434 | orchestrator | 2026-03-28 00:31:38.946451 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-03-28 00:31:38.946468 | orchestrator | Saturday 28 March 2026 00:31:26 +0000 (0:00:00.638) 0:03:36.661 ******** 2026-03-28 00:31:38.946483 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:31:38.946500 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:31:38.946518 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:31:38.946535 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:31:38.946553 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:31:38.946570 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:31:38.946589 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:31:38.946607 | orchestrator | 2026-03-28 00:31:38.946620 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-03-28 00:31:38.946633 | orchestrator | Saturday 28 March 2026 00:31:26 +0000 (0:00:00.328) 0:03:36.990 ******** 2026-03-28 00:31:38.946645 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:31:38.946657 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:31:38.946667 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:31:38.946676 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:31:38.946686 | 
orchestrator | ok: [testbed-node-0] 2026-03-28 00:31:38.946695 | orchestrator | ok: [testbed-manager] 2026-03-28 00:31:38.946704 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:31:38.946714 | orchestrator | 2026-03-28 00:31:38.946723 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-03-28 00:31:38.946733 | orchestrator | Saturday 28 March 2026 00:31:32 +0000 (0:00:05.991) 0:03:42.981 ******** 2026-03-28 00:31:38.946743 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-03-28 00:31:38.946753 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:31:38.946762 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-03-28 00:31:38.946771 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:31:38.946781 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-03-28 00:31:38.946790 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:31:38.946800 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-03-28 00:31:38.946809 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-03-28 00:31:38.946819 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:31:38.946829 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-03-28 00:31:38.946855 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:31:38.946865 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:31:38.946875 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-03-28 00:31:38.946884 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:31:38.946894 | orchestrator | 2026-03-28 00:31:38.946913 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-03-28 00:31:38.946923 | orchestrator | Saturday 28 March 2026 00:31:33 +0000 (0:00:00.303) 0:03:43.285 ******** 2026-03-28 00:31:38.946932 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-03-28 00:31:38.946942 | orchestrator | ok: [testbed-node-3] => 
(item=cron) 2026-03-28 00:31:38.946952 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-03-28 00:31:38.946979 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-03-28 00:31:38.946990 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-03-28 00:31:38.946999 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-03-28 00:31:38.947009 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-03-28 00:31:38.947018 | orchestrator | 2026-03-28 00:31:38.947028 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-03-28 00:31:38.947072 | orchestrator | Saturday 28 March 2026 00:31:34 +0000 (0:00:01.214) 0:03:44.499 ******** 2026-03-28 00:31:38.947087 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:31:38.947100 | orchestrator | 2026-03-28 00:31:38.947110 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-03-28 00:31:38.947119 | orchestrator | Saturday 28 March 2026 00:31:34 +0000 (0:00:00.435) 0:03:44.935 ******** 2026-03-28 00:31:38.947129 | orchestrator | ok: [testbed-manager] 2026-03-28 00:31:38.947139 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:31:38.947149 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:31:38.947158 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:31:38.947168 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:31:38.947177 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:31:38.947186 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:31:38.947196 | orchestrator | 2026-03-28 00:31:38.947206 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-03-28 00:31:38.947215 | orchestrator | Saturday 28 March 2026 00:31:36 +0000 (0:00:01.249) 0:03:46.184 
******** 2026-03-28 00:31:38.947225 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:31:38.947234 | orchestrator | ok: [testbed-manager] 2026-03-28 00:31:38.947244 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:31:38.947253 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:31:38.947263 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:31:38.947272 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:31:38.947282 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:31:38.947291 | orchestrator | 2026-03-28 00:31:38.947301 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-03-28 00:31:38.947311 | orchestrator | Saturday 28 March 2026 00:31:36 +0000 (0:00:00.635) 0:03:46.820 ******** 2026-03-28 00:31:38.947320 | orchestrator | changed: [testbed-manager] 2026-03-28 00:31:38.947330 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:31:38.947339 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:31:38.947349 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:31:38.947359 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:31:38.947368 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:31:38.947378 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:31:38.947387 | orchestrator | 2026-03-28 00:31:38.947397 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-03-28 00:31:38.947407 | orchestrator | Saturday 28 March 2026 00:31:37 +0000 (0:00:00.698) 0:03:47.519 ******** 2026-03-28 00:31:38.947417 | orchestrator | ok: [testbed-manager] 2026-03-28 00:31:38.947426 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:31:38.947436 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:31:38.947445 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:31:38.947455 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:31:38.947464 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:31:38.947474 | orchestrator | ok: [testbed-node-2] 2026-03-28 
00:31:38.947483 | orchestrator | 2026-03-28 00:31:38.947493 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-03-28 00:31:38.947511 | orchestrator | Saturday 28 March 2026 00:31:37 +0000 (0:00:00.564) 0:03:48.083 ******** 2026-03-28 00:31:38.947524 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774656293.504414, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 00:31:38.947538 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774656334.6003425, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 00:31:38.947561 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774656339.3356462, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 00:31:38.947608 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774656323.0103872, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 00:31:43.821602 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774656332.59523, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 00:31:43.822525 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774656330.0066278, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 00:31:43.822559 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774656306.7487807, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 00:31:43.822589 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 00:31:43.822598 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 00:31:43.822616 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 00:31:43.822623 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 00:31:43.822649 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 00:31:43.822657 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 
00:31:43.822665 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 00:31:43.822678 | orchestrator | 2026-03-28 00:31:43.822686 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-03-28 00:31:43.822695 | orchestrator | Saturday 28 March 2026 00:31:38 +0000 (0:00:00.941) 0:03:49.024 ******** 2026-03-28 00:31:43.822702 | orchestrator | changed: [testbed-manager] 2026-03-28 00:31:43.822710 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:31:43.822716 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:31:43.822722 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:31:43.822730 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:31:43.822736 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:31:43.822743 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:31:43.822750 | orchestrator | 2026-03-28 00:31:43.822756 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-03-28 00:31:43.822763 | orchestrator | Saturday 28 March 2026 00:31:40 +0000 (0:00:01.129) 0:03:50.154 ******** 2026-03-28 00:31:43.822769 | orchestrator | changed: [testbed-manager] 2026-03-28 00:31:43.822775 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:31:43.822781 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:31:43.822787 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:31:43.822792 | orchestrator | changed: [testbed-node-0] 
2026-03-28 00:31:43.822798 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:31:43.822804 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:31:43.822809 | orchestrator | 2026-03-28 00:31:43.822815 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-03-28 00:31:43.822821 | orchestrator | Saturday 28 March 2026 00:31:41 +0000 (0:00:01.178) 0:03:51.332 ******** 2026-03-28 00:31:43.822827 | orchestrator | changed: [testbed-manager] 2026-03-28 00:31:43.822833 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:31:43.822840 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:31:43.822846 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:31:43.822852 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:31:43.822859 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:31:43.822865 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:31:43.822872 | orchestrator | 2026-03-28 00:31:43.822878 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-03-28 00:31:43.822884 | orchestrator | Saturday 28 March 2026 00:31:42 +0000 (0:00:01.104) 0:03:52.436 ******** 2026-03-28 00:31:43.822891 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:31:43.822898 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:31:43.822910 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:31:43.822916 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:31:43.822923 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:31:43.822929 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:31:43.822936 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:31:43.822943 | orchestrator | 2026-03-28 00:31:43.822949 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-03-28 00:31:43.822956 | orchestrator | Saturday 28 March 2026 00:31:42 +0000 (0:00:00.275) 0:03:52.712 ******** 2026-03-28 
00:31:43.822963 | orchestrator | ok: [testbed-manager] 2026-03-28 00:31:43.822971 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:31:43.822978 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:31:43.822985 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:31:43.822992 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:31:43.822998 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:31:43.823004 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:31:43.823010 | orchestrator | 2026-03-28 00:31:43.823017 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-03-28 00:31:43.823023 | orchestrator | Saturday 28 March 2026 00:31:43 +0000 (0:00:00.757) 0:03:53.470 ******** 2026-03-28 00:31:43.823056 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:31:43.823072 | orchestrator | 2026-03-28 00:31:43.823079 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-03-28 00:31:43.823094 | orchestrator | Saturday 28 March 2026 00:31:43 +0000 (0:00:00.429) 0:03:53.900 ******** 2026-03-28 00:33:00.434359 | orchestrator | ok: [testbed-manager] 2026-03-28 00:33:00.434465 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:33:00.434481 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:33:00.434492 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:33:00.434502 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:33:00.434512 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:33:00.434522 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:33:00.434532 | orchestrator | 2026-03-28 00:33:00.434543 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-03-28 00:33:00.434555 | orchestrator | 
Saturday 28 March 2026 00:31:52 +0000 (0:00:08.367) 0:04:02.267 ******** 2026-03-28 00:33:00.434565 | orchestrator | ok: [testbed-manager] 2026-03-28 00:33:00.434574 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:33:00.434584 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:33:00.434594 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:33:00.434604 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:33:00.434614 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:33:00.434623 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:33:00.434633 | orchestrator | 2026-03-28 00:33:00.434643 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-03-28 00:33:00.434653 | orchestrator | Saturday 28 March 2026 00:31:53 +0000 (0:00:01.238) 0:04:03.506 ******** 2026-03-28 00:33:00.434663 | orchestrator | ok: [testbed-manager] 2026-03-28 00:33:00.434672 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:33:00.434682 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:33:00.434692 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:33:00.434701 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:33:00.434711 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:33:00.434720 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:33:00.434730 | orchestrator | 2026-03-28 00:33:00.434740 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-03-28 00:33:00.434750 | orchestrator | Saturday 28 March 2026 00:31:54 +0000 (0:00:01.130) 0:04:04.636 ******** 2026-03-28 00:33:00.434759 | orchestrator | ok: [testbed-manager] 2026-03-28 00:33:00.434769 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:33:00.434779 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:33:00.434788 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:33:00.434799 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:33:00.434808 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:33:00.434818 | orchestrator | ok: 
[testbed-node-2] 2026-03-28 00:33:00.434828 | orchestrator | 2026-03-28 00:33:00.434838 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-03-28 00:33:00.434849 | orchestrator | Saturday 28 March 2026 00:31:54 +0000 (0:00:00.366) 0:04:05.002 ******** 2026-03-28 00:33:00.434858 | orchestrator | ok: [testbed-manager] 2026-03-28 00:33:00.434868 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:33:00.434878 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:33:00.434887 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:33:00.434897 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:33:00.434907 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:33:00.434916 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:33:00.434926 | orchestrator | 2026-03-28 00:33:00.434935 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-03-28 00:33:00.434945 | orchestrator | Saturday 28 March 2026 00:31:55 +0000 (0:00:00.354) 0:04:05.357 ******** 2026-03-28 00:33:00.434955 | orchestrator | ok: [testbed-manager] 2026-03-28 00:33:00.434965 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:33:00.434975 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:33:00.435066 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:33:00.435079 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:33:00.435089 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:33:00.435099 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:33:00.435108 | orchestrator | 2026-03-28 00:33:00.435118 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-03-28 00:33:00.435127 | orchestrator | Saturday 28 March 2026 00:31:55 +0000 (0:00:00.320) 0:04:05.678 ******** 2026-03-28 00:33:00.435137 | orchestrator | ok: [testbed-manager] 2026-03-28 00:33:00.435146 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:33:00.435156 | orchestrator | ok: 
[testbed-node-5] 2026-03-28 00:33:00.435165 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:33:00.435175 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:33:00.435184 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:33:00.435194 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:33:00.435214 | orchestrator | 2026-03-28 00:33:00.435225 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-03-28 00:33:00.435234 | orchestrator | Saturday 28 March 2026 00:32:01 +0000 (0:00:05.441) 0:04:11.119 ******** 2026-03-28 00:33:00.435246 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:33:00.435258 | orchestrator | 2026-03-28 00:33:00.435268 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-03-28 00:33:00.435278 | orchestrator | Saturday 28 March 2026 00:32:01 +0000 (0:00:00.407) 0:04:11.526 ******** 2026-03-28 00:33:00.435288 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-03-28 00:33:00.435298 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-03-28 00:33:00.435308 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-03-28 00:33:00.435318 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-03-28 00:33:00.435328 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:33:00.435353 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-03-28 00:33:00.435363 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-03-28 00:33:00.435373 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:33:00.435383 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-03-28 00:33:00.435393 | orchestrator | 
skipping: [testbed-node-5] => (item=apt-daily)  2026-03-28 00:33:00.435402 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:33:00.435412 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-03-28 00:33:00.435422 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-03-28 00:33:00.435431 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:33:00.435441 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-03-28 00:33:00.435451 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-03-28 00:33:00.435476 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:33:00.435486 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:33:00.435496 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-03-28 00:33:00.435506 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-03-28 00:33:00.435516 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:33:00.435525 | orchestrator | 2026-03-28 00:33:00.435535 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-03-28 00:33:00.435545 | orchestrator | Saturday 28 March 2026 00:32:01 +0000 (0:00:00.387) 0:04:11.914 ******** 2026-03-28 00:33:00.435555 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:33:00.435565 | orchestrator | 2026-03-28 00:33:00.435575 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-03-28 00:33:00.435593 | orchestrator | Saturday 28 March 2026 00:32:02 +0000 (0:00:00.460) 0:04:12.375 ******** 2026-03-28 00:33:00.435603 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-03-28 00:33:00.435612 | orchestrator | skipping: 
[testbed-node-3] => (item=ModemManager.service)  2026-03-28 00:33:00.435622 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:33:00.435632 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-03-28 00:33:00.435641 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:33:00.435651 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-03-28 00:33:00.435661 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:33:00.435671 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-03-28 00:33:00.435681 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:33:00.435690 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-03-28 00:33:00.435700 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:33:00.435710 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:33:00.435719 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-03-28 00:33:00.435729 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:33:00.435739 | orchestrator | 2026-03-28 00:33:00.435749 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-03-28 00:33:00.435758 | orchestrator | Saturday 28 March 2026 00:32:02 +0000 (0:00:00.371) 0:04:12.746 ******** 2026-03-28 00:33:00.435768 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:33:00.435779 | orchestrator | 2026-03-28 00:33:00.435788 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-03-28 00:33:00.435798 | orchestrator | Saturday 28 March 2026 00:32:03 +0000 (0:00:00.433) 0:04:13.180 ******** 2026-03-28 00:33:00.435808 | orchestrator | changed: [testbed-node-5] 2026-03-28 
00:33:00.435818 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:33:00.435827 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:33:00.435837 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:33:00.435847 | orchestrator | changed: [testbed-manager] 2026-03-28 00:33:00.435856 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:33:00.435866 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:33:00.435876 | orchestrator | 2026-03-28 00:33:00.435886 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-03-28 00:33:00.435896 | orchestrator | Saturday 28 March 2026 00:32:37 +0000 (0:00:34.526) 0:04:47.707 ******** 2026-03-28 00:33:00.435905 | orchestrator | changed: [testbed-manager] 2026-03-28 00:33:00.435915 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:33:00.435925 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:33:00.435934 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:33:00.435944 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:33:00.435954 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:33:00.435964 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:33:00.435973 | orchestrator | 2026-03-28 00:33:00.435983 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-03-28 00:33:00.436024 | orchestrator | Saturday 28 March 2026 00:32:45 +0000 (0:00:07.965) 0:04:55.672 ******** 2026-03-28 00:33:00.436035 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:33:00.436045 | orchestrator | changed: [testbed-manager] 2026-03-28 00:33:00.436055 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:33:00.436064 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:33:00.436074 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:33:00.436084 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:33:00.436093 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:33:00.436103 | 
orchestrator | 2026-03-28 00:33:00.436113 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-03-28 00:33:00.436129 | orchestrator | Saturday 28 March 2026 00:32:52 +0000 (0:00:07.314) 0:05:02.987 ******** 2026-03-28 00:33:00.436139 | orchestrator | ok: [testbed-manager] 2026-03-28 00:33:00.436149 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:33:00.436158 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:33:00.436168 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:33:00.436178 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:33:00.436187 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:33:00.436196 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:33:00.436206 | orchestrator | 2026-03-28 00:33:00.436216 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-03-28 00:33:00.436226 | orchestrator | Saturday 28 March 2026 00:32:54 +0000 (0:00:01.657) 0:05:04.645 ******** 2026-03-28 00:33:00.436235 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:33:00.436245 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:33:00.436255 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:33:00.436264 | orchestrator | changed: [testbed-manager] 2026-03-28 00:33:00.436274 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:33:00.436284 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:33:00.436294 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:33:00.436303 | orchestrator | 2026-03-28 00:33:00.436319 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-03-28 00:33:14.281911 | orchestrator | Saturday 28 March 2026 00:33:00 +0000 (0:00:05.863) 0:05:10.508 ******** 2026-03-28 00:33:14.282128 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:33:14.282151 | orchestrator | 2026-03-28 00:33:14.282163 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-03-28 00:33:14.282175 | orchestrator | Saturday 28 March 2026 00:33:00 +0000 (0:00:00.443) 0:05:10.952 ******** 2026-03-28 00:33:14.282186 | orchestrator | changed: [testbed-manager] 2026-03-28 00:33:14.282198 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:33:14.282208 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:33:14.282219 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:33:14.282230 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:33:14.282241 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:33:14.282255 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:33:14.282273 | orchestrator | 2026-03-28 00:33:14.282292 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-03-28 00:33:14.282311 | orchestrator | Saturday 28 March 2026 00:33:01 +0000 (0:00:00.822) 0:05:11.775 ******** 2026-03-28 00:33:14.282330 | orchestrator | ok: [testbed-manager] 2026-03-28 00:33:14.282350 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:33:14.282364 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:33:14.282375 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:33:14.282386 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:33:14.282397 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:33:14.282408 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:33:14.282418 | orchestrator | 2026-03-28 00:33:14.282430 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-03-28 00:33:14.282441 | orchestrator | Saturday 28 March 2026 00:33:04 +0000 (0:00:02.672) 0:05:14.447 ******** 2026-03-28 00:33:14.282452 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:33:14.282463 | orchestrator | changed: [testbed-node-5] 
2026-03-28 00:33:14.282474 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:33:14.282485 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:33:14.282496 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:33:14.282507 | orchestrator | changed: [testbed-manager] 2026-03-28 00:33:14.282518 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:33:14.282529 | orchestrator | 2026-03-28 00:33:14.282540 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-03-28 00:33:14.282551 | orchestrator | Saturday 28 March 2026 00:33:06 +0000 (0:00:01.768) 0:05:16.216 ******** 2026-03-28 00:33:14.282588 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:33:14.282600 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:33:14.282611 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:33:14.282621 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:33:14.282632 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:33:14.282643 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:33:14.282654 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:33:14.282665 | orchestrator | 2026-03-28 00:33:14.282676 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-03-28 00:33:14.282687 | orchestrator | Saturday 28 March 2026 00:33:06 +0000 (0:00:00.301) 0:05:16.517 ******** 2026-03-28 00:33:14.282697 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:33:14.282708 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:33:14.282719 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:33:14.282730 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:33:14.282740 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:33:14.282751 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:33:14.282762 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:33:14.282773 | orchestrator | 2026-03-28 00:33:14.282784 | 
orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-03-28 00:33:14.282795 | orchestrator | Saturday 28 March 2026 00:33:06 +0000 (0:00:00.435) 0:05:16.952 ******** 2026-03-28 00:33:14.282805 | orchestrator | ok: [testbed-manager] 2026-03-28 00:33:14.282816 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:33:14.282827 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:33:14.282838 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:33:14.282848 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:33:14.282859 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:33:14.282870 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:33:14.282881 | orchestrator | 2026-03-28 00:33:14.282892 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-03-28 00:33:14.282917 | orchestrator | Saturday 28 March 2026 00:33:07 +0000 (0:00:00.411) 0:05:17.364 ******** 2026-03-28 00:33:14.282928 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:33:14.282939 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:33:14.282950 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:33:14.282961 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:33:14.282971 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:33:14.283013 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:33:14.283030 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:33:14.283041 | orchestrator | 2026-03-28 00:33:14.283052 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-28 00:33:14.283064 | orchestrator | Saturday 28 March 2026 00:33:07 +0000 (0:00:00.362) 0:05:17.727 ******** 2026-03-28 00:33:14.283075 | orchestrator | ok: [testbed-manager] 2026-03-28 00:33:14.283086 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:33:14.283097 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:33:14.283107 | orchestrator | ok: 
[testbed-node-5] 2026-03-28 00:33:14.283118 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:33:14.283129 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:33:14.283139 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:33:14.283150 | orchestrator | 2026-03-28 00:33:14.283161 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-28 00:33:14.283172 | orchestrator | Saturday 28 March 2026 00:33:07 +0000 (0:00:00.359) 0:05:18.086 ******** 2026-03-28 00:33:14.283183 | orchestrator | ok: [testbed-manager] =>  2026-03-28 00:33:14.283194 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 00:33:14.283205 | orchestrator | ok: [testbed-node-3] =>  2026-03-28 00:33:14.283215 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 00:33:14.283226 | orchestrator | ok: [testbed-node-4] =>  2026-03-28 00:33:14.283237 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 00:33:14.283248 | orchestrator | ok: [testbed-node-5] =>  2026-03-28 00:33:14.283258 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 00:33:14.283288 | orchestrator | ok: [testbed-node-0] =>  2026-03-28 00:33:14.283309 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 00:33:14.283320 | orchestrator | ok: [testbed-node-1] =>  2026-03-28 00:33:14.283331 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 00:33:14.283341 | orchestrator | ok: [testbed-node-2] =>  2026-03-28 00:33:14.283352 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 00:33:14.283363 | orchestrator | 2026-03-28 00:33:14.283374 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-28 00:33:14.283385 | orchestrator | Saturday 28 March 2026 00:33:08 +0000 (0:00:00.318) 0:05:18.405 ******** 2026-03-28 00:33:14.283396 | orchestrator | ok: [testbed-manager] =>  2026-03-28 00:33:14.283407 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-28 00:33:14.283417 | orchestrator | ok: [testbed-node-3] =>  2026-03-28 00:33:14.283428 | 
orchestrator |  docker_cli_version: 5:27.5.1 2026-03-28 00:33:14.283439 | orchestrator | ok: [testbed-node-4] =>  2026-03-28 00:33:14.283450 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-28 00:33:14.283461 | orchestrator | ok: [testbed-node-5] =>  2026-03-28 00:33:14.283472 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-28 00:33:14.283482 | orchestrator | ok: [testbed-node-0] =>  2026-03-28 00:33:14.283493 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-28 00:33:14.283504 | orchestrator | ok: [testbed-node-1] =>  2026-03-28 00:33:14.283515 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-28 00:33:14.283526 | orchestrator | ok: [testbed-node-2] =>  2026-03-28 00:33:14.283536 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-28 00:33:14.283547 | orchestrator | 2026-03-28 00:33:14.283559 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-28 00:33:14.283570 | orchestrator | Saturday 28 March 2026 00:33:08 +0000 (0:00:00.313) 0:05:18.718 ******** 2026-03-28 00:33:14.283581 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:33:14.283592 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:33:14.283603 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:33:14.283613 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:33:14.283624 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:33:14.283635 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:33:14.283646 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:33:14.283657 | orchestrator | 2026-03-28 00:33:14.283668 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-03-28 00:33:14.283679 | orchestrator | Saturday 28 March 2026 00:33:08 +0000 (0:00:00.308) 0:05:19.027 ******** 2026-03-28 00:33:14.283690 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:33:14.283701 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:33:14.283712 
| orchestrator | skipping: [testbed-node-4] 2026-03-28 00:33:14.283723 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:33:14.283733 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:33:14.283744 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:33:14.283755 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:33:14.283766 | orchestrator | 2026-03-28 00:33:14.283777 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-03-28 00:33:14.283788 | orchestrator | Saturday 28 March 2026 00:33:09 +0000 (0:00:00.317) 0:05:19.344 ******** 2026-03-28 00:33:14.283801 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:33:14.283814 | orchestrator | 2026-03-28 00:33:14.283825 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-03-28 00:33:14.283836 | orchestrator | Saturday 28 March 2026 00:33:09 +0000 (0:00:00.499) 0:05:19.844 ******** 2026-03-28 00:33:14.283847 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:33:14.283858 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:33:14.283869 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:33:14.283880 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:33:14.283891 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:33:14.283908 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:33:14.283919 | orchestrator | ok: [testbed-manager] 2026-03-28 00:33:14.283930 | orchestrator | 2026-03-28 00:33:14.283941 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-03-28 00:33:14.283952 | orchestrator | Saturday 28 March 2026 00:33:10 +0000 (0:00:01.007) 0:05:20.851 ******** 2026-03-28 00:33:14.283963 | orchestrator | ok: [testbed-node-0] 
2026-03-28 00:33:14.283974 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:33:14.284010 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:33:14.284022 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:33:14.284033 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:33:14.284048 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:33:14.284059 | orchestrator | ok: [testbed-manager] 2026-03-28 00:33:14.284070 | orchestrator | 2026-03-28 00:33:14.284081 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-03-28 00:33:14.284093 | orchestrator | Saturday 28 March 2026 00:33:13 +0000 (0:00:03.104) 0:05:23.955 ******** 2026-03-28 00:33:14.284104 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-03-28 00:33:14.284115 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-03-28 00:33:14.284126 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-03-28 00:33:14.284137 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-03-28 00:33:14.284148 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-03-28 00:33:14.284159 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-03-28 00:33:14.284169 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:33:14.284180 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-03-28 00:33:14.284191 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-03-28 00:33:14.284202 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:33:14.284213 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-03-28 00:33:14.284224 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-03-28 00:33:14.284234 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-03-28 00:33:14.284245 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-03-28 00:33:14.284256 | 
orchestrator | skipping: [testbed-node-4] 2026-03-28 00:33:14.284267 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-03-28 00:33:14.284284 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-03-28 00:34:13.843150 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-03-28 00:34:13.843246 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:13.843260 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-03-28 00:34:13.843270 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-03-28 00:34:13.843279 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-03-28 00:34:13.843287 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:13.843296 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:34:13.843305 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-03-28 00:34:13.843314 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-03-28 00:34:13.843322 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-03-28 00:34:13.843331 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:13.843340 | orchestrator | 2026-03-28 00:34:13.843349 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-03-28 00:34:13.843360 | orchestrator | Saturday 28 March 2026 00:33:14 +0000 (0:00:00.634) 0:05:24.590 ******** 2026-03-28 00:34:13.843369 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:13.843378 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:13.843386 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:13.843395 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:13.843404 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:13.843413 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:13.843421 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:13.843454 | orchestrator | 2026-03-28 
00:34:13.843463 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-03-28 00:34:13.843472 | orchestrator | Saturday 28 March 2026 00:33:20 +0000 (0:00:06.405) 0:05:30.995 ******** 2026-03-28 00:34:13.843481 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:13.843490 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:13.843498 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:13.843507 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:13.843516 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:13.843524 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:13.843533 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:13.843541 | orchestrator | 2026-03-28 00:34:13.843550 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-03-28 00:34:13.843559 | orchestrator | Saturday 28 March 2026 00:33:21 +0000 (0:00:01.070) 0:05:32.066 ******** 2026-03-28 00:34:13.843567 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:13.843576 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:13.843585 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:13.843593 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:13.843602 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:13.843610 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:13.843619 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:13.843627 | orchestrator | 2026-03-28 00:34:13.843636 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-03-28 00:34:13.843645 | orchestrator | Saturday 28 March 2026 00:33:29 +0000 (0:00:07.981) 0:05:40.048 ******** 2026-03-28 00:34:13.843653 | orchestrator | changed: [testbed-manager] 2026-03-28 00:34:13.843662 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:13.843671 | orchestrator | changed: [testbed-node-5] 2026-03-28 
00:34:13.843679 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:13.843688 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:13.843698 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:13.843708 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:13.843718 | orchestrator | 2026-03-28 00:34:13.843728 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-03-28 00:34:13.843738 | orchestrator | Saturday 28 March 2026 00:33:33 +0000 (0:00:03.231) 0:05:43.279 ******** 2026-03-28 00:34:13.843748 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:13.843758 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:13.843768 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:13.843777 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:13.843788 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:13.843797 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:13.843807 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:13.843817 | orchestrator | 2026-03-28 00:34:13.843827 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-03-28 00:34:13.843837 | orchestrator | Saturday 28 March 2026 00:33:34 +0000 (0:00:01.277) 0:05:44.556 ******** 2026-03-28 00:34:13.843847 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:13.843857 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:13.843867 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:13.843877 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:13.843887 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:13.843896 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:13.843906 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:13.843916 | orchestrator | 2026-03-28 00:34:13.843961 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-03-28 
00:34:13.843973 | orchestrator | Saturday 28 March 2026 00:33:36 +0000 (0:00:01.564) 0:05:46.121 ******** 2026-03-28 00:34:13.843982 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:13.843992 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:13.844002 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:13.844012 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:13.844032 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:34:13.844043 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:13.844053 | orchestrator | changed: [testbed-manager] 2026-03-28 00:34:13.844063 | orchestrator | 2026-03-28 00:34:13.844072 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-03-28 00:34:13.844081 | orchestrator | Saturday 28 March 2026 00:33:36 +0000 (0:00:00.596) 0:05:46.717 ******** 2026-03-28 00:34:13.844089 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:13.844098 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:13.844106 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:13.844115 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:13.844124 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:13.844132 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:13.844141 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:13.844149 | orchestrator | 2026-03-28 00:34:13.844158 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-03-28 00:34:13.844181 | orchestrator | Saturday 28 March 2026 00:33:46 +0000 (0:00:09.529) 0:05:56.246 ******** 2026-03-28 00:34:13.844190 | orchestrator | changed: [testbed-manager] 2026-03-28 00:34:13.844199 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:13.844207 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:13.844216 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:13.844224 | orchestrator | changed: 
[testbed-node-0] 2026-03-28 00:34:13.844233 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:13.844241 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:13.844250 | orchestrator | 2026-03-28 00:34:13.844259 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-03-28 00:34:13.844268 | orchestrator | Saturday 28 March 2026 00:33:47 +0000 (0:00:00.915) 0:05:57.162 ******** 2026-03-28 00:34:13.844277 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:13.844285 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:13.844294 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:13.844302 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:13.844311 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:13.844319 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:13.844328 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:13.844337 | orchestrator | 2026-03-28 00:34:13.844345 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-03-28 00:34:13.844354 | orchestrator | Saturday 28 March 2026 00:33:56 +0000 (0:00:09.149) 0:06:06.311 ******** 2026-03-28 00:34:13.844363 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:13.844371 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:13.844380 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:13.844389 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:13.844397 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:13.844406 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:13.844414 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:13.844423 | orchestrator | 2026-03-28 00:34:13.844431 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-03-28 00:34:13.844440 | orchestrator | Saturday 28 March 2026 00:34:07 +0000 (0:00:11.336) 0:06:17.648 ******** 2026-03-28 
00:34:13.844449 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-03-28 00:34:13.844457 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-03-28 00:34:13.844466 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-03-28 00:34:13.844474 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-03-28 00:34:13.844483 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-03-28 00:34:13.844492 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-03-28 00:34:13.844500 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-03-28 00:34:13.844509 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-03-28 00:34:13.844517 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-03-28 00:34:13.844526 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-03-28 00:34:13.844540 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-03-28 00:34:13.844590 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-03-28 00:34:13.844613 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-03-28 00:34:13.844621 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-03-28 00:34:13.844640 | orchestrator | 2026-03-28 00:34:13.844649 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2026-03-28 00:34:13.844658 | orchestrator | Saturday 28 March 2026 00:34:08 +0000 (0:00:01.207) 0:06:18.856 ******** 2026-03-28 00:34:13.844666 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:13.844675 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:13.844683 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:13.844692 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:13.844700 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:13.844709 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:34:13.844717 | orchestrator 
| skipping: [testbed-node-2] 2026-03-28 00:34:13.844726 | orchestrator | 2026-03-28 00:34:13.844735 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-03-28 00:34:13.844743 | orchestrator | Saturday 28 March 2026 00:34:09 +0000 (0:00:00.536) 0:06:19.392 ******** 2026-03-28 00:34:13.844752 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:13.844761 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:13.844769 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:13.844778 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:13.844786 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:13.844795 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:13.844808 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:13.844816 | orchestrator | 2026-03-28 00:34:13.844825 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-03-28 00:34:13.844835 | orchestrator | Saturday 28 March 2026 00:34:12 +0000 (0:00:03.571) 0:06:22.964 ******** 2026-03-28 00:34:13.844844 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:13.844852 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:13.844861 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:13.844869 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:13.844878 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:13.844886 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:34:13.844894 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:13.844903 | orchestrator | 2026-03-28 00:34:13.844912 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-03-28 00:34:13.844921 | orchestrator | Saturday 28 March 2026 00:34:13 +0000 (0:00:00.486) 0:06:23.451 ******** 2026-03-28 00:34:13.844942 | orchestrator | skipping: [testbed-manager] => 
(item=python3-docker)  2026-03-28 00:34:13.844952 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-03-28 00:34:13.844960 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:13.844969 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-03-28 00:34:13.844978 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-03-28 00:34:13.844986 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:13.844995 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-03-28 00:34:13.845004 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-03-28 00:34:13.845012 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:13.845026 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-03-28 00:34:33.193309 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-03-28 00:34:33.193460 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:33.193490 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-03-28 00:34:33.193502 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-03-28 00:34:33.193514 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:33.193553 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-03-28 00:34:33.193565 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-03-28 00:34:33.193580 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:34:33.193599 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-03-28 00:34:33.193615 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-03-28 00:34:33.193633 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:33.193651 | orchestrator | 2026-03-28 00:34:33.193672 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-03-28 00:34:33.193693 | 
orchestrator | Saturday 28 March 2026 00:34:14 +0000 (0:00:00.749) 0:06:24.200 ******** 2026-03-28 00:34:33.193711 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:33.193730 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:33.193743 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:33.193754 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:33.193765 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:33.193775 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:34:33.193786 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:33.193797 | orchestrator | 2026-03-28 00:34:33.193808 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-03-28 00:34:33.193820 | orchestrator | Saturday 28 March 2026 00:34:14 +0000 (0:00:00.493) 0:06:24.694 ******** 2026-03-28 00:34:33.193833 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:33.193845 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:33.193857 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:33.193869 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:33.193881 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:33.193893 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:34:33.193932 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:33.193946 | orchestrator | 2026-03-28 00:34:33.193958 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-03-28 00:34:33.193970 | orchestrator | Saturday 28 March 2026 00:34:15 +0000 (0:00:00.533) 0:06:25.227 ******** 2026-03-28 00:34:33.193981 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:33.193992 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:33.194005 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:33.194101 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:33.194122 | orchestrator | 
skipping: [testbed-node-0] 2026-03-28 00:34:33.194139 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:34:33.194156 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:33.194174 | orchestrator | 2026-03-28 00:34:33.194194 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-03-28 00:34:33.194214 | orchestrator | Saturday 28 March 2026 00:34:15 +0000 (0:00:00.522) 0:06:25.750 ******** 2026-03-28 00:34:33.194231 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:33.194251 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:33.194270 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:33.194287 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:33.194298 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:33.194309 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:33.194320 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:33.194330 | orchestrator | 2026-03-28 00:34:33.194341 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-03-28 00:34:33.194352 | orchestrator | Saturday 28 March 2026 00:34:17 +0000 (0:00:01.842) 0:06:27.593 ******** 2026-03-28 00:34:33.194369 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:34:33.194390 | orchestrator | 2026-03-28 00:34:33.194408 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-03-28 00:34:33.194425 | orchestrator | Saturday 28 March 2026 00:34:18 +0000 (0:00:00.887) 0:06:28.480 ******** 2026-03-28 00:34:33.194470 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:33.194491 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:33.194509 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:33.194524 | orchestrator | 
changed: [testbed-node-5] 2026-03-28 00:34:33.194535 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:33.194546 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:33.194557 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:33.194567 | orchestrator | 2026-03-28 00:34:33.194579 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-03-28 00:34:33.194589 | orchestrator | Saturday 28 March 2026 00:34:19 +0000 (0:00:00.920) 0:06:29.400 ******** 2026-03-28 00:34:33.194600 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:33.194611 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:33.194622 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:33.194641 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:33.194659 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:33.194678 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:33.194696 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:33.194716 | orchestrator | 2026-03-28 00:34:33.194735 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-03-28 00:34:33.194752 | orchestrator | Saturday 28 March 2026 00:34:20 +0000 (0:00:00.854) 0:06:30.254 ******** 2026-03-28 00:34:33.194771 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:33.194790 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:33.194808 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:33.194819 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:33.194830 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:33.194841 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:33.194851 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:33.194862 | orchestrator | 2026-03-28 00:34:33.194873 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-03-28 00:34:33.194983 | 
orchestrator | Saturday 28 March 2026 00:34:21 +0000 (0:00:01.586) 0:06:31.841 ******** 2026-03-28 00:34:33.195005 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:33.195025 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:33.195044 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:33.195056 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:33.195067 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:33.195078 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:33.195088 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:33.195099 | orchestrator | 2026-03-28 00:34:33.195110 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-03-28 00:34:33.195121 | orchestrator | Saturday 28 March 2026 00:34:23 +0000 (0:00:01.401) 0:06:33.243 ******** 2026-03-28 00:34:33.195132 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:33.195142 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:33.195153 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:33.195164 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:33.195175 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:33.195185 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:33.195196 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:33.195207 | orchestrator | 2026-03-28 00:34:33.195217 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-03-28 00:34:33.195228 | orchestrator | Saturday 28 March 2026 00:34:24 +0000 (0:00:01.280) 0:06:34.523 ******** 2026-03-28 00:34:33.195239 | orchestrator | changed: [testbed-manager] 2026-03-28 00:34:33.195250 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:33.195260 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:33.195271 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:33.195282 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:33.195292 | 
orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:33.195303 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:33.195313 | orchestrator | 2026-03-28 00:34:33.195335 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-03-28 00:34:33.195346 | orchestrator | Saturday 28 March 2026 00:34:25 +0000 (0:00:01.380) 0:06:35.903 ******** 2026-03-28 00:34:33.195357 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:34:33.195368 | orchestrator | 2026-03-28 00:34:33.195379 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-03-28 00:34:33.195390 | orchestrator | Saturday 28 March 2026 00:34:26 +0000 (0:00:01.058) 0:06:36.962 ******** 2026-03-28 00:34:33.195401 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:33.195411 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:33.195422 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:33.195433 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:33.195443 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:33.195454 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:33.195465 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:33.195475 | orchestrator | 2026-03-28 00:34:33.195486 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-03-28 00:34:33.195497 | orchestrator | Saturday 28 March 2026 00:34:28 +0000 (0:00:01.339) 0:06:38.302 ******** 2026-03-28 00:34:33.195508 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:33.195518 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:33.195529 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:33.195539 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:33.195550 | orchestrator | 
ok: [testbed-node-0] 2026-03-28 00:34:33.195560 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:33.195571 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:33.195582 | orchestrator | 2026-03-28 00:34:33.195593 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-03-28 00:34:33.195603 | orchestrator | Saturday 28 March 2026 00:34:29 +0000 (0:00:01.189) 0:06:39.491 ******** 2026-03-28 00:34:33.195614 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:33.195625 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:33.195635 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:33.195646 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:33.195657 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:33.195671 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:33.195690 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:33.195709 | orchestrator | 2026-03-28 00:34:33.195728 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-03-28 00:34:33.195748 | orchestrator | Saturday 28 March 2026 00:34:30 +0000 (0:00:01.135) 0:06:40.627 ******** 2026-03-28 00:34:33.195766 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:33.195800 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:33.195812 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:33.195822 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:33.195833 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:33.195844 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:33.195855 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:33.195865 | orchestrator | 2026-03-28 00:34:33.195876 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-03-28 00:34:33.195887 | orchestrator | Saturday 28 March 2026 00:34:31 +0000 (0:00:01.350) 0:06:41.978 ******** 2026-03-28 00:34:33.195898 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:34:33.196051 | orchestrator | 2026-03-28 00:34:33.196081 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-28 00:34:33.196092 | orchestrator | Saturday 28 March 2026 00:34:32 +0000 (0:00:00.980) 0:06:42.958 ******** 2026-03-28 00:34:33.196103 | orchestrator | 2026-03-28 00:34:33.196114 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-28 00:34:33.196137 | orchestrator | Saturday 28 March 2026 00:34:32 +0000 (0:00:00.041) 0:06:43.000 ******** 2026-03-28 00:34:33.196175 | orchestrator | 2026-03-28 00:34:33.196186 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-28 00:34:33.196197 | orchestrator | Saturday 28 March 2026 00:34:32 +0000 (0:00:00.049) 0:06:43.049 ******** 2026-03-28 00:34:33.196207 | orchestrator | 2026-03-28 00:34:33.196219 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-28 00:34:33.196245 | orchestrator | Saturday 28 March 2026 00:34:32 +0000 (0:00:00.040) 0:06:43.090 ******** 2026-03-28 00:34:59.402765 | orchestrator | 2026-03-28 00:34:59.402863 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-28 00:34:59.402948 | orchestrator | Saturday 28 March 2026 00:34:33 +0000 (0:00:00.040) 0:06:43.131 ******** 2026-03-28 00:34:59.402962 | orchestrator | 2026-03-28 00:34:59.402974 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-28 00:34:59.402985 | orchestrator | Saturday 28 March 2026 00:34:33 +0000 (0:00:00.047) 0:06:43.179 ******** 2026-03-28 00:34:59.402996 | orchestrator | 2026-03-28 
00:34:59.403007 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-28 00:34:59.403017 | orchestrator | Saturday 28 March 2026 00:34:33 +0000 (0:00:00.040) 0:06:43.219 ******** 2026-03-28 00:34:59.403028 | orchestrator | 2026-03-28 00:34:59.403038 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-28 00:34:59.403049 | orchestrator | Saturday 28 March 2026 00:34:33 +0000 (0:00:00.041) 0:06:43.261 ******** 2026-03-28 00:34:59.403060 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:59.403071 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:59.403082 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:59.403093 | orchestrator | 2026-03-28 00:34:59.403104 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-03-28 00:34:59.403115 | orchestrator | Saturday 28 March 2026 00:34:34 +0000 (0:00:01.151) 0:06:44.413 ******** 2026-03-28 00:34:59.403126 | orchestrator | changed: [testbed-manager] 2026-03-28 00:34:59.403137 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:59.403148 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:59.403158 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:59.403169 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:59.403180 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:59.403190 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:59.403201 | orchestrator | 2026-03-28 00:34:59.403212 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-03-28 00:34:59.403223 | orchestrator | Saturday 28 March 2026 00:34:35 +0000 (0:00:01.603) 0:06:46.016 ******** 2026-03-28 00:34:59.403234 | orchestrator | changed: [testbed-manager] 2026-03-28 00:34:59.403245 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:59.403255 | orchestrator | changed: [testbed-node-4] 2026-03-28 
00:34:59.403266 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:59.403277 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:59.403287 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:59.403298 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:59.403309 | orchestrator | 2026-03-28 00:34:59.403320 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-03-28 00:34:59.403333 | orchestrator | Saturday 28 March 2026 00:34:37 +0000 (0:00:01.301) 0:06:47.318 ******** 2026-03-28 00:34:59.403345 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:59.403358 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:59.403370 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:59.403382 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:59.403395 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:59.403407 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:59.403420 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:59.403433 | orchestrator | 2026-03-28 00:34:59.403446 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-03-28 00:34:59.403458 | orchestrator | Saturday 28 March 2026 00:34:39 +0000 (0:00:02.497) 0:06:49.816 ******** 2026-03-28 00:34:59.403494 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:59.403507 | orchestrator | 2026-03-28 00:34:59.403519 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-03-28 00:34:59.403532 | orchestrator | Saturday 28 March 2026 00:34:39 +0000 (0:00:00.085) 0:06:49.901 ******** 2026-03-28 00:34:59.403544 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:59.403556 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:59.403567 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:59.403579 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:59.403592 | 
orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:59.403604 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:59.403616 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:59.403628 | orchestrator | 2026-03-28 00:34:59.403641 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-03-28 00:34:59.403653 | orchestrator | Saturday 28 March 2026 00:34:40 +0000 (0:00:01.028) 0:06:50.930 ******** 2026-03-28 00:34:59.403666 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:59.403690 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:59.403701 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:59.403712 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:59.403723 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:59.403733 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:34:59.403744 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:59.403754 | orchestrator | 2026-03-28 00:34:59.403765 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-03-28 00:34:59.403776 | orchestrator | Saturday 28 March 2026 00:34:41 +0000 (0:00:00.544) 0:06:51.474 ******** 2026-03-28 00:34:59.403788 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:34:59.403800 | orchestrator | 2026-03-28 00:34:59.403811 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-03-28 00:34:59.403822 | orchestrator | Saturday 28 March 2026 00:34:42 +0000 (0:00:01.141) 0:06:52.616 ******** 2026-03-28 00:34:59.403833 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:59.403843 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:59.403854 | orchestrator | ok: 
[testbed-node-4] 2026-03-28 00:34:59.403865 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:59.403876 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:59.403903 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:59.403914 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:59.403925 | orchestrator | 2026-03-28 00:34:59.403936 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-03-28 00:34:59.403947 | orchestrator | Saturday 28 March 2026 00:34:43 +0000 (0:00:00.823) 0:06:53.440 ******** 2026-03-28 00:34:59.403958 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-03-28 00:34:59.403985 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-03-28 00:34:59.403997 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-03-28 00:34:59.404008 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-03-28 00:34:59.404019 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-03-28 00:34:59.404030 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-03-28 00:34:59.404041 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-03-28 00:34:59.404051 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-03-28 00:34:59.404062 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-03-28 00:34:59.404073 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-03-28 00:34:59.404084 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-03-28 00:34:59.404095 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-03-28 00:34:59.404114 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-03-28 00:34:59.404125 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-03-28 00:34:59.404136 | orchestrator | 2026-03-28 00:34:59.404147 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-03-28 00:34:59.404158 | orchestrator | Saturday 28 March 2026 00:34:45 +0000 (0:00:02.505) 0:06:55.945 ******** 2026-03-28 00:34:59.404169 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:59.404180 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:59.404190 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:59.404201 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:59.404212 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:59.404223 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:34:59.404234 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:59.404245 | orchestrator | 2026-03-28 00:34:59.404256 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-03-28 00:34:59.404267 | orchestrator | Saturday 28 March 2026 00:34:46 +0000 (0:00:00.867) 0:06:56.813 ******** 2026-03-28 00:34:59.404279 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:34:59.404291 | orchestrator | 2026-03-28 00:34:59.404302 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-03-28 00:34:59.404313 | orchestrator | Saturday 28 March 2026 00:34:47 +0000 (0:00:00.837) 0:06:57.650 ******** 2026-03-28 00:34:59.404324 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:59.404334 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:59.404345 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:59.404356 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:59.404367 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:59.404377 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:59.404388 | orchestrator | ok: 
[testbed-node-2] 2026-03-28 00:34:59.404399 | orchestrator | 2026-03-28 00:34:59.404410 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-03-28 00:34:59.404420 | orchestrator | Saturday 28 March 2026 00:34:48 +0000 (0:00:00.832) 0:06:58.483 ******** 2026-03-28 00:34:59.404431 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:59.404442 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:59.404452 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:59.404463 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:59.404474 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:59.404484 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:59.404495 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:59.404505 | orchestrator | 2026-03-28 00:34:59.404516 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-03-28 00:34:59.404527 | orchestrator | Saturday 28 March 2026 00:34:49 +0000 (0:00:01.077) 0:06:59.560 ******** 2026-03-28 00:34:59.404538 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:59.404549 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:59.404559 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:59.404570 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:59.404581 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:59.404591 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:34:59.404602 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:59.404612 | orchestrator | 2026-03-28 00:34:59.404623 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-03-28 00:34:59.404634 | orchestrator | Saturday 28 March 2026 00:34:50 +0000 (0:00:00.541) 0:07:00.102 ******** 2026-03-28 00:34:59.404645 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:59.404656 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:59.404666 | 
orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:59.404677 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:59.404687 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:59.404704 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:59.404715 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:59.404725 | orchestrator | 2026-03-28 00:34:59.404736 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-03-28 00:34:59.404747 | orchestrator | Saturday 28 March 2026 00:34:51 +0000 (0:00:01.487) 0:07:01.589 ******** 2026-03-28 00:34:59.404758 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:59.404769 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:59.404779 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:59.404790 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:59.404801 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:59.404811 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:34:59.404822 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:59.404833 | orchestrator | 2026-03-28 00:34:59.404843 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-03-28 00:34:59.404854 | orchestrator | Saturday 28 March 2026 00:34:52 +0000 (0:00:00.515) 0:07:02.105 ******** 2026-03-28 00:34:59.404865 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:59.404876 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:59.404901 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:59.404912 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:59.404923 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:59.404934 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:59.404951 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:35:30.379956 | orchestrator | 2026-03-28 00:35:30.380081 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-03-28 00:35:30.380099 | orchestrator | Saturday 28 March 2026 00:34:59 +0000 (0:00:07.370) 0:07:09.476 ******** 2026-03-28 00:35:30.380110 | orchestrator | ok: [testbed-manager] 2026-03-28 00:35:30.380120 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:35:30.380130 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:35:30.380139 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:35:30.380147 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:35:30.380156 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:35:30.380165 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:35:30.380174 | orchestrator | 2026-03-28 00:35:30.380183 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-03-28 00:35:30.380192 | orchestrator | Saturday 28 March 2026 00:35:00 +0000 (0:00:01.552) 0:07:11.028 ******** 2026-03-28 00:35:30.380200 | orchestrator | ok: [testbed-manager] 2026-03-28 00:35:30.380209 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:35:30.380222 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:35:30.380236 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:35:30.380251 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:35:30.380265 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:35:30.380280 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:35:30.380295 | orchestrator | 2026-03-28 00:35:30.380308 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-03-28 00:35:30.380318 | orchestrator | Saturday 28 March 2026 00:35:02 +0000 (0:00:01.678) 0:07:12.706 ******** 2026-03-28 00:35:30.380327 | orchestrator | ok: [testbed-manager] 2026-03-28 00:35:30.380335 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:35:30.380344 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:35:30.380358 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:35:30.380373 | 
orchestrator | changed: [testbed-node-1] 2026-03-28 00:35:30.380388 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:35:30.380405 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:35:30.380419 | orchestrator | 2026-03-28 00:35:30.380435 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-28 00:35:30.380451 | orchestrator | Saturday 28 March 2026 00:35:04 +0000 (0:00:01.759) 0:07:14.465 ******** 2026-03-28 00:35:30.380463 | orchestrator | ok: [testbed-manager] 2026-03-28 00:35:30.380473 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:35:30.380482 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:35:30.380518 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:35:30.380529 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:35:30.380539 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:35:30.380549 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:35:30.380559 | orchestrator | 2026-03-28 00:35:30.380569 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-28 00:35:30.380579 | orchestrator | Saturday 28 March 2026 00:35:05 +0000 (0:00:00.842) 0:07:15.308 ******** 2026-03-28 00:35:30.380589 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:35:30.380599 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:35:30.380609 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:35:30.380619 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:35:30.380629 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:35:30.380638 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:35:30.380648 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:35:30.380657 | orchestrator | 2026-03-28 00:35:30.380667 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-03-28 00:35:30.380677 | orchestrator | Saturday 28 March 2026 00:35:06 +0000 (0:00:01.043) 0:07:16.352 ******** 
2026-03-28 00:35:30.380687 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:35:30.380697 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:35:30.380707 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:35:30.380717 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:35:30.380727 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:35:30.380736 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:35:30.380747 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:35:30.380757 | orchestrator |
2026-03-28 00:35:30.380766 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-28 00:35:30.380776 | orchestrator | Saturday 28 March 2026 00:35:06 +0000 (0:00:00.530) 0:07:16.882 ********
2026-03-28 00:35:30.380786 | orchestrator | ok: [testbed-manager]
2026-03-28 00:35:30.380813 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:35:30.380822 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:35:30.380830 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:35:30.380839 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:35:30.380848 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:35:30.380887 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:35:30.380898 | orchestrator |
2026-03-28 00:35:30.380906 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-28 00:35:30.380915 | orchestrator | Saturday 28 March 2026 00:35:07 +0000 (0:00:00.456) 0:07:17.339 ********
2026-03-28 00:35:30.380924 | orchestrator | ok: [testbed-manager]
2026-03-28 00:35:30.380932 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:35:30.380941 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:35:30.380950 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:35:30.380959 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:35:30.380968 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:35:30.380976 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:35:30.380985 | orchestrator |
2026-03-28 00:35:30.380994 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-28 00:35:30.381002 | orchestrator | Saturday 28 March 2026 00:35:07 +0000 (0:00:00.611) 0:07:17.951 ********
2026-03-28 00:35:30.381011 | orchestrator | ok: [testbed-manager]
2026-03-28 00:35:30.381020 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:35:30.381028 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:35:30.381041 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:35:30.381056 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:35:30.381071 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:35:30.381086 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:35:30.381101 | orchestrator |
2026-03-28 00:35:30.381116 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-28 00:35:30.381129 | orchestrator | Saturday 28 March 2026 00:35:08 +0000 (0:00:00.446) 0:07:18.398 ********
2026-03-28 00:35:30.381138 | orchestrator | ok: [testbed-manager]
2026-03-28 00:35:30.381146 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:35:30.381163 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:35:30.381172 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:35:30.381181 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:35:30.381189 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:35:30.381198 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:35:30.381206 | orchestrator |
2026-03-28 00:35:30.381231 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-28 00:35:30.381241 | orchestrator | Saturday 28 March 2026 00:35:13 +0000 (0:00:05.322) 0:07:23.720 ********
2026-03-28 00:35:30.381249 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:35:30.381258 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:35:30.381266 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:35:30.381275 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:35:30.381283 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:35:30.381292 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:35:30.381300 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:35:30.381308 | orchestrator |
2026-03-28 00:35:30.381317 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-28 00:35:30.381325 | orchestrator | Saturday 28 March 2026 00:35:14 +0000 (0:00:00.561) 0:07:24.281 ********
2026-03-28 00:35:30.381336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:35:30.381347 | orchestrator |
2026-03-28 00:35:30.381356 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-28 00:35:30.381365 | orchestrator | Saturday 28 March 2026 00:35:15 +0000 (0:00:01.028) 0:07:25.310 ********
2026-03-28 00:35:30.381373 | orchestrator | ok: [testbed-manager]
2026-03-28 00:35:30.381382 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:35:30.381390 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:35:30.381399 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:35:30.381407 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:35:30.381416 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:35:30.381424 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:35:30.381433 | orchestrator |
2026-03-28 00:35:30.381441 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-28 00:35:30.381450 | orchestrator | Saturday 28 March 2026 00:35:17 +0000 (0:00:01.850) 0:07:27.160 ********
2026-03-28 00:35:30.381458 | orchestrator | ok: [testbed-manager]
2026-03-28 00:35:30.381467 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:35:30.381475 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:35:30.381484 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:35:30.381492 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:35:30.381501 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:35:30.381509 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:35:30.381518 | orchestrator |
2026-03-28 00:35:30.381526 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-28 00:35:30.381535 | orchestrator | Saturday 28 March 2026 00:35:18 +0000 (0:00:01.122) 0:07:28.283 ********
2026-03-28 00:35:30.381543 | orchestrator | ok: [testbed-manager]
2026-03-28 00:35:30.381552 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:35:30.381560 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:35:30.381569 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:35:30.381577 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:35:30.381586 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:35:30.381594 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:35:30.381602 | orchestrator |
2026-03-28 00:35:30.381611 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-28 00:35:30.381620 | orchestrator | Saturday 28 March 2026 00:35:19 +0000 (0:00:00.828) 0:07:29.111 ********
2026-03-28 00:35:30.381629 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-28 00:35:30.381638 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-28 00:35:30.381653 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-28 00:35:30.381662 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-28 00:35:30.381676 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-28 00:35:30.381685 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-28 00:35:30.381693 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-28 00:35:30.381702 | orchestrator |
2026-03-28 00:35:30.381711 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-03-28 00:35:30.381719 | orchestrator | Saturday 28 March 2026 00:35:20 +0000 (0:00:01.868) 0:07:30.980 ********
2026-03-28 00:35:30.381728 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:35:30.381737 | orchestrator |
2026-03-28 00:35:30.381745 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-03-28 00:35:30.381755 | orchestrator | Saturday 28 March 2026 00:35:21 +0000 (0:00:00.809) 0:07:31.790 ********
2026-03-28 00:35:30.381763 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:35:30.381772 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:35:30.381780 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:35:30.381789 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:35:30.381798 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:35:30.381806 | orchestrator | changed: [testbed-manager]
2026-03-28 00:35:30.381814 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:35:30.381823 | orchestrator |
2026-03-28 00:35:30.381837 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-03-28 00:36:01.257148 | orchestrator | Saturday 28 March 2026 00:35:30 +0000 (0:00:08.664) 0:07:40.454 ********
2026-03-28 00:36:01.257244 | orchestrator | ok: [testbed-manager]
2026-03-28 00:36:01.257263 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:36:01.257275 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:36:01.257287 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:36:01.257298 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:36:01.257309 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:36:01.257320 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:36:01.257331 | orchestrator |
2026-03-28 00:36:01.257342 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-03-28 00:36:01.257354 | orchestrator | Saturday 28 March 2026 00:35:32 +0000 (0:00:02.056) 0:07:42.510 ********
2026-03-28 00:36:01.257365 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:36:01.257376 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:36:01.257387 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:36:01.257398 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:36:01.257409 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:36:01.257420 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:36:01.257431 | orchestrator |
2026-03-28 00:36:01.257442 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-03-28 00:36:01.257453 | orchestrator | Saturday 28 March 2026 00:35:33 +0000 (0:00:01.281) 0:07:43.791 ********
2026-03-28 00:36:01.257464 | orchestrator | changed: [testbed-manager]
2026-03-28 00:36:01.257476 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:36:01.257487 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:36:01.257498 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:36:01.257509 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:36:01.257545 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:36:01.257558 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:36:01.257568 | orchestrator |
2026-03-28 00:36:01.257579 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-03-28 00:36:01.257591 | orchestrator |
2026-03-28 00:36:01.257602 | orchestrator | TASK [Include hardening role] **************************************************
2026-03-28 00:36:01.257613 | orchestrator | Saturday 28 March 2026 00:35:34 +0000 (0:00:01.240) 0:07:45.031 ********
2026-03-28 00:36:01.257623 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:36:01.257634 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:36:01.257645 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:36:01.257656 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:36:01.257667 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:36:01.257678 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:36:01.257689 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:36:01.257702 | orchestrator |
2026-03-28 00:36:01.257715 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-03-28 00:36:01.257728 | orchestrator |
2026-03-28 00:36:01.257740 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-03-28 00:36:01.257753 | orchestrator | Saturday 28 March 2026 00:35:35 +0000 (0:00:00.762) 0:07:45.794 ********
2026-03-28 00:36:01.257766 | orchestrator | changed: [testbed-manager]
2026-03-28 00:36:01.257778 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:36:01.257791 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:36:01.257804 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:36:01.257818 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:36:01.257852 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:36:01.257866 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:36:01.257878 | orchestrator |
2026-03-28 00:36:01.257890 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-03-28 00:36:01.257902 | orchestrator | Saturday 28 March 2026 00:35:37 +0000 (0:00:01.414) 0:07:47.209 ********
2026-03-28 00:36:01.257915 | orchestrator | ok: [testbed-manager]
2026-03-28 00:36:01.257927 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:36:01.257939 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:36:01.257952 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:36:01.257964 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:36:01.257976 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:36:01.257988 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:36:01.258001 | orchestrator |
2026-03-28 00:36:01.258013 | orchestrator | TASK [Include auditd role] *****************************************************
2026-03-28 00:36:01.258079 | orchestrator | Saturday 28 March 2026 00:35:38 +0000 (0:00:01.439) 0:07:48.648 ********
2026-03-28 00:36:01.258090 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:36:01.258101 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:36:01.258113 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:36:01.258124 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:36:01.258135 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:36:01.258158 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:36:01.258169 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:36:01.258180 | orchestrator |
2026-03-28 00:36:01.258191 | orchestrator | TASK [Include smartd role] *****************************************************
2026-03-28 00:36:01.258203 | orchestrator | Saturday 28 March 2026 00:35:39 +0000 (0:00:00.522) 0:07:49.171 ********
2026-03-28 00:36:01.258215 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:36:01.258227 | orchestrator |
2026-03-28 00:36:01.258238 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-03-28 00:36:01.258249 | orchestrator | Saturday 28 March 2026 00:35:40 +0000 (0:00:01.051) 0:07:50.222 ********
2026-03-28 00:36:01.258262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:36:01.258284 | orchestrator |
2026-03-28 00:36:01.258295 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-03-28 00:36:01.258306 | orchestrator | Saturday 28 March 2026 00:35:40 +0000 (0:00:00.823) 0:07:51.046 ********
2026-03-28 00:36:01.258317 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:36:01.258328 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:36:01.258339 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:36:01.258349 | orchestrator | changed: [testbed-manager]
2026-03-28 00:36:01.258361 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:36:01.258371 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:36:01.258382 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:36:01.258393 | orchestrator |
2026-03-28 00:36:01.258422 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-03-28 00:36:01.258434 | orchestrator | Saturday 28 March 2026 00:35:49 +0000 (0:00:08.159) 0:07:59.206 ********
2026-03-28 00:36:01.258445 | orchestrator | changed: [testbed-manager]
2026-03-28 00:36:01.258456 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:36:01.258467 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:36:01.258478 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:36:01.258489 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:36:01.258500 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:36:01.258511 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:36:01.258522 | orchestrator |
2026-03-28 00:36:01.258533 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-03-28 00:36:01.258544 | orchestrator | Saturday 28 March 2026 00:35:49 +0000 (0:00:00.818) 0:08:00.024 ********
2026-03-28 00:36:01.258555 | orchestrator | changed: [testbed-manager]
2026-03-28 00:36:01.258566 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:36:01.258577 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:36:01.258588 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:36:01.258598 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:36:01.258609 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:36:01.258620 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:36:01.258631 | orchestrator |
2026-03-28 00:36:01.258642 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-03-28 00:36:01.258653 | orchestrator | Saturday 28 March 2026 00:35:51 +0000 (0:00:01.285) 0:08:01.310 ********
2026-03-28 00:36:01.258669 | orchestrator | changed: [testbed-manager]
2026-03-28 00:36:01.258688 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:36:01.258708 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:36:01.258727 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:36:01.258746 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:36:01.258764 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:36:01.258783 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:36:01.258800 | orchestrator |
2026-03-28 00:36:01.258816 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-03-28 00:36:01.258859 | orchestrator | Saturday 28 March 2026 00:35:53 +0000 (0:00:01.866) 0:08:03.176 ********
2026-03-28 00:36:01.258879 | orchestrator | changed: [testbed-manager]
2026-03-28 00:36:01.258899 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:36:01.258918 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:36:01.258937 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:36:01.258953 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:36:01.258964 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:36:01.258975 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:36:01.258986 | orchestrator |
2026-03-28 00:36:01.258997 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-03-28 00:36:01.259008 | orchestrator | Saturday 28 March 2026 00:35:54 +0000 (0:00:01.332) 0:08:04.509 ********
2026-03-28 00:36:01.259019 | orchestrator | changed: [testbed-manager]
2026-03-28 00:36:01.259030 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:36:01.259050 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:36:01.259061 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:36:01.259072 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:36:01.259083 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:36:01.259093 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:36:01.259104 | orchestrator |
2026-03-28 00:36:01.259115 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-03-28 00:36:01.259126 | orchestrator |
2026-03-28 00:36:01.259137 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-03-28 00:36:01.259148 | orchestrator | Saturday 28 March 2026 00:35:56 +0000 (0:00:01.905) 0:08:06.414 ********
2026-03-28 00:36:01.259159 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:36:01.259170 | orchestrator |
2026-03-28 00:36:01.259181 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-28 00:36:01.259192 | orchestrator | Saturday 28 March 2026 00:35:57 +0000 (0:00:00.861) 0:08:07.275 ********
2026-03-28 00:36:01.259202 | orchestrator | ok: [testbed-manager]
2026-03-28 00:36:01.259213 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:36:01.259224 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:36:01.259235 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:36:01.259246 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:36:01.259257 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:36:01.259274 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:36:01.259285 | orchestrator |
2026-03-28 00:36:01.259296 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-28 00:36:01.259307 | orchestrator | Saturday 28 March 2026 00:35:58 +0000 (0:00:01.083) 0:08:08.359 ********
2026-03-28 00:36:01.259318 | orchestrator | changed: [testbed-manager]
2026-03-28 00:36:01.259329 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:36:01.259340 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:36:01.259351 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:36:01.259362 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:36:01.259373 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:36:01.259384 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:36:01.259394 | orchestrator |
2026-03-28 00:36:01.259405 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-03-28 00:36:01.259416 | orchestrator | Saturday 28 March 2026 00:35:59 +0000 (0:00:01.175) 0:08:09.534 ********
2026-03-28 00:36:01.259427 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:36:01.259438 | orchestrator |
2026-03-28 00:36:01.259452 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-28 00:36:01.259470 | orchestrator | Saturday 28 March 2026 00:36:00 +0000 (0:00:01.016) 0:08:10.550 ********
2026-03-28 00:36:01.259489 | orchestrator | ok: [testbed-manager]
2026-03-28 00:36:01.259509 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:36:01.259528 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:36:01.259541 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:36:01.259552 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:36:01.259565 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:36:01.259583 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:36:01.259597 | orchestrator |
2026-03-28 00:36:01.259618 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-28 00:36:02.577132 | orchestrator | Saturday 28 March 2026 00:36:01 +0000 (0:00:00.781) 0:08:11.332 ********
2026-03-28 00:36:02.577186 | orchestrator | changed: [testbed-manager]
2026-03-28 00:36:02.577194 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:36:02.577199 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:36:02.577204 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:36:02.577209 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:36:02.577213 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:36:02.577218 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:36:02.577235 | orchestrator |
2026-03-28 00:36:02.577241 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:36:02.577246 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-03-28 00:36:02.577251 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-28 00:36:02.577256 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-28 00:36:02.577261 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-28 00:36:02.577265 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-03-28 00:36:02.577270 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-28 00:36:02.577274 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-28 00:36:02.577279 | orchestrator |
2026-03-28 00:36:02.577284 | orchestrator |
2026-03-28 00:36:02.577288 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:36:02.577293 | orchestrator | Saturday 28 March 2026 00:36:02 +0000 (0:00:00.982) 0:08:12.314 ********
2026-03-28 00:36:02.577297 | orchestrator | ===============================================================================
2026-03-28 00:36:02.577302 | orchestrator | osism.commons.packages : Install required packages --------------------- 79.10s
2026-03-28 00:36:02.577306 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.30s
2026-03-28 00:36:02.577311 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.53s
2026-03-28 00:36:02.577316 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.36s
2026-03-28 00:36:02.577320 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.18s
2026-03-28 00:36:02.577325 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.87s
2026-03-28 00:36:02.577330 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.34s
2026-03-28 00:36:02.577334 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.53s
2026-03-28 00:36:02.577339 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.15s
2026-03-28 00:36:02.577343 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.66s
2026-03-28 00:36:02.577348 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.37s
2026-03-28 00:36:02.577352 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.16s
2026-03-28 00:36:02.577357 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.98s
2026-03-28 00:36:02.577368 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.97s
2026-03-28 00:36:02.577373 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.37s
2026-03-28 00:36:02.577377 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.31s
2026-03-28 00:36:02.577382 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.41s
2026-03-28 00:36:02.577386 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.99s
2026-03-28 00:36:02.577391 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.86s
2026-03-28 00:36:02.577395 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.44s
2026-03-28 00:36:02.824567 | orchestrator | + osism apply fail2ban
2026-03-28 00:36:15.402375 | orchestrator | 2026-03-28 00:36:15 | INFO  | Task 76efcee2-b889-4a2f-9d12-2356f44f71f9 (fail2ban) was prepared for execution.
2026-03-28 00:36:15.402472 | orchestrator | 2026-03-28 00:36:15 | INFO  | It takes a moment until task 76efcee2-b889-4a2f-9d12-2356f44f71f9 (fail2ban) has been started and output is visible here.
2026-03-28 00:36:37.294833 | orchestrator |
2026-03-28 00:36:37.294975 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-03-28 00:36:37.295007 | orchestrator |
2026-03-28 00:36:37.295028 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-03-28 00:36:37.295047 | orchestrator | Saturday 28 March 2026 00:36:20 +0000 (0:00:00.291) 0:00:00.291 ********
2026-03-28 00:36:37.295067 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:36:37.295081 | orchestrator |
2026-03-28 00:36:37.295092 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-03-28 00:36:37.295103 | orchestrator | Saturday 28 March 2026 00:36:21 +0000 (0:00:01.184) 0:00:01.475 ********
2026-03-28 00:36:37.295114 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:36:37.295126 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:36:37.295137 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:36:37.295147 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:36:37.295158 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:36:37.295169 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:36:37.295179 | orchestrator | changed: [testbed-manager]
2026-03-28 00:36:37.295191 | orchestrator |
2026-03-28 00:36:37.295202 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-03-28 00:36:37.295213 | orchestrator | Saturday 28 March 2026 00:36:32 +0000 (0:00:10.592) 0:00:12.068 ********
2026-03-28 00:36:37.295224 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:36:37.295235 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:36:37.295245 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:36:37.295256 | orchestrator | changed: [testbed-manager]
2026-03-28 00:36:37.295267 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:36:37.295278 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:36:37.295288 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:36:37.295299 | orchestrator |
2026-03-28 00:36:37.295310 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-03-28 00:36:37.295321 | orchestrator | Saturday 28 March 2026 00:36:33 +0000 (0:00:01.632) 0:00:13.700 ********
2026-03-28 00:36:37.295334 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:36:37.295348 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:36:37.295360 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:36:37.295373 | orchestrator | ok: [testbed-manager]
2026-03-28 00:36:37.295385 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:36:37.295397 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:36:37.295410 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:36:37.295422 | orchestrator |
2026-03-28 00:36:37.295435 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-03-28 00:36:37.295445 | orchestrator | Saturday 28 March 2026 00:36:35 +0000 (0:00:01.438) 0:00:15.139 ********
2026-03-28 00:36:37.295457 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:36:37.295467 | orchestrator | changed: [testbed-manager]
2026-03-28 00:36:37.295478 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:36:37.295489 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:36:37.295500 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:36:37.295513 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:36:37.295532 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:36:37.295551 | orchestrator |
2026-03-28 00:36:37.295569 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:36:37.295587 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:36:37.295646 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:36:37.295668 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:36:37.295686 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:36:37.295701 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:36:37.295712 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:36:37.295724 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:36:37.295735 | orchestrator |
2026-03-28 00:36:37.295746 | orchestrator |
2026-03-28 00:36:37.295757 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:36:37.295768 | orchestrator | Saturday 28 March 2026 00:36:36 +0000 (0:00:01.587) 0:00:16.726 ********
2026-03-28 00:36:37.295779 | orchestrator | ===============================================================================
2026-03-28 00:36:37.295790 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 10.59s
2026-03-28 00:36:37.295849 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.63s
2026-03-28 00:36:37.295860 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.59s
2026-03-28 00:36:37.295871 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.44s
2026-03-28 00:36:37.295882 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.18s
2026-03-28 00:36:37.622424 | orchestrator | + osism apply network
2026-03-28 00:36:49.701099 | orchestrator | 2026-03-28 00:36:49 | INFO  | Task 9ca1e3df-0858-4dde-95e7-e84950b2363b (network) was prepared for execution.
2026-03-28 00:36:49.701202 | orchestrator | 2026-03-28 00:36:49 | INFO  | It takes a moment until task 9ca1e3df-0858-4dde-95e7-e84950b2363b (network) has been started and output is visible here.
2026-03-28 00:37:18.622833 | orchestrator |
2026-03-28 00:37:18.622940 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-28 00:37:18.622953 | orchestrator |
2026-03-28 00:37:18.622961 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-28 00:37:18.622969 | orchestrator | Saturday 28 March 2026 00:36:53 +0000 (0:00:00.259) 0:00:00.259 ********
2026-03-28 00:37:18.622977 | orchestrator | ok: [testbed-manager]
2026-03-28 00:37:18.622986 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:37:18.622993 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:37:18.623001 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:37:18.623008 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:37:18.623015 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:37:18.623022 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:37:18.623029 | orchestrator |
2026-03-28 00:37:18.623037 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-28 00:37:18.623044 | orchestrator | Saturday 28 March 2026 00:36:54 +0000 (0:00:00.754) 0:00:01.013 ********
2026-03-28 00:37:18.623053 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:37:18.623061 | orchestrator |
2026-03-28 00:37:18.623068 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-28 00:37:18.623076 | orchestrator | Saturday 28 March 2026 00:36:55 +0000 (0:00:01.208) 0:00:02.222 ********
2026-03-28 00:37:18.623105 | orchestrator | ok: [testbed-manager]
2026-03-28 00:37:18.623113 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:37:18.623119 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:37:18.623125 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:37:18.623131 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:37:18.623138 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:37:18.623145 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:37:18.623151 | orchestrator |
2026-03-28 00:37:18.623158 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-28 00:37:18.623164 | orchestrator | Saturday 28 March 2026 00:36:58 +0000 (0:00:02.061) 0:00:04.283 ********
2026-03-28 00:37:18.623171 | orchestrator | ok: [testbed-manager]
2026-03-28 00:37:18.623178 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:37:18.623186 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:37:18.623192 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:37:18.623198 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:37:18.623204 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:37:18.623211 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:37:18.623217 | orchestrator |
2026-03-28 00:37:18.623224 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-28 00:37:18.623231 | orchestrator | Saturday 28 March 2026 00:36:59 +0000 (0:00:00.992) 0:00:06.095 ********
2026-03-28 00:37:18.623238 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-28 00:37:18.623245 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-28 00:37:18.623252 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-28 00:37:18.623259 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-28 00:37:18.623266 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-28 00:37:18.623273 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-28 00:37:18.623281 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-28 00:37:18.623288 | orchestrator |
2026-03-28 00:37:18.623310 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-03-28 00:37:18.623318 | orchestrator | Saturday 28 March 2026 00:37:00 +0000 (0:00:00.992) 0:00:07.088 ********
2026-03-28 00:37:18.623325 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-28 00:37:18.623333 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 00:37:18.623340 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-28 00:37:18.623347 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-28 00:37:18.623355 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-28 00:37:18.623362 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 00:37:18.623370 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-28 00:37:18.623378 | orchestrator |
2026-03-28 00:37:18.623387 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-03-28 00:37:18.623395 | orchestrator | Saturday 28 March 2026 00:37:04 +0000 (0:00:03.546) 0:00:10.635 ********
2026-03-28 00:37:18.623403 | orchestrator | changed: [testbed-manager]
2026-03-28 00:37:18.623409 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:37:18.623417 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:37:18.623424 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:37:18.623432 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:37:18.623443 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:37:18.623451 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:37:18.623460 | orchestrator |
2026-03-28 00:37:18.623468 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-03-28 00:37:18.623475 | orchestrator | Saturday 28 March 2026 00:37:05 +0000 (0:00:01.577) 0:00:12.212 ********
2026-03-28 00:37:18.623482 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 00:37:18.623489 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 00:37:18.623497 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-28 00:37:18.623504 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-28 00:37:18.623512 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-28 00:37:18.623525 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-28 00:37:18.623533 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-28 00:37:18.623541 | orchestrator |
2026-03-28 00:37:18.623548 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-03-28 00:37:18.623556 | orchestrator | Saturday 28 March 2026 00:37:07 +0000 (0:00:01.696) 0:00:13.909 ********
2026-03-28 00:37:18.623563 | orchestrator | ok: [testbed-manager]
2026-03-28 00:37:18.623571 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:37:18.623578 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:37:18.623586 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:37:18.623593 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:37:18.623601 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:37:18.623608 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:37:18.623616 | orchestrator |
2026-03-28 00:37:18.623624 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-03-28 00:37:18.623648 | orchestrator | Saturday 28 March 2026 00:37:08 +0000 (0:00:01.173) 0:00:15.082 ********
2026-03-28 00:37:18.623656 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:37:18.623664 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:37:18.623672 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:37:18.623678 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:37:18.623684 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:37:18.623690 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:37:18.623696 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:37:18.623702 | orchestrator |
2026-03-28 00:37:18.623708 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-03-28 00:37:18.623714 | orchestrator | Saturday 28 March 2026 00:37:09 +0000 (0:00:00.650) 0:00:15.732 ********
2026-03-28 00:37:18.623720 | orchestrator | ok: [testbed-manager]
2026-03-28 00:37:18.623726 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:37:18.623732 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:37:18.623739 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:37:18.623745 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:37:18.623751 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:37:18.623782 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:37:18.623788 | orchestrator |
2026-03-28 00:37:18.623794 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-03-28 00:37:18.623800 | orchestrator | Saturday 28 March 2026 00:37:11 +0000 (0:00:02.279) 0:00:18.012 ********
2026-03-28 00:37:18.623806 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:37:18.623812 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:37:18.623818 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:37:18.623824 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:37:18.623831 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:37:18.623836 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:37:18.623843 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-03-28 00:37:18.623851 | orchestrator |
2026-03-28 00:37:18.623858 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-03-28 00:37:18.623865 | orchestrator | Saturday 28 March 2026 00:37:12 +0000 (0:00:00.914) 0:00:18.926 ********
2026-03-28 00:37:18.623871 | orchestrator | ok: [testbed-manager]
2026-03-28 00:37:18.623877 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:37:18.623884 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:37:18.623890 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:37:18.623896 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:37:18.623902 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:37:18.623908 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:37:18.623915 | orchestrator |
2026-03-28 00:37:18.623921 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-03-28 00:37:18.623927 | orchestrator | Saturday 28 March 2026 00:37:14 +0000 (0:00:01.630) 0:00:20.557 ********
2026-03-28 00:37:18.623935 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:37:18.623951 | orchestrator |
2026-03-28 00:37:18.623958 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-28 00:37:18.623965 | orchestrator | Saturday 28 March 2026 00:37:15 +0000 (0:00:01.233) 0:00:21.790 ********
2026-03-28 00:37:18.623972 | orchestrator | ok: [testbed-manager]
2026-03-28 00:37:18.623979 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:37:18.623986 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:37:18.623992 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:37:18.623999 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:37:18.624006 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:37:18.624013 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:37:18.624020 | orchestrator |
2026-03-28 00:37:18.624026 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-03-28 00:37:18.624033 | orchestrator | Saturday 28 March 2026 00:37:16 +0000 (0:00:00.939) 0:00:22.730 ********
2026-03-28 00:37:18.624040 | orchestrator | ok: [testbed-manager]
2026-03-28 00:37:18.624046 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:37:18.624053 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:37:18.624060 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:37:18.624066 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:37:18.624073 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:37:18.624079 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:37:18.624086 | orchestrator |
2026-03-28 00:37:18.624092 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-28 00:37:18.624098 | orchestrator | Saturday 28 March 2026 00:37:17 +0000 (0:00:00.931) 0:00:23.662 ********
2026-03-28 00:37:18.624108 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-03-28 00:37:18.624116 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-03-28 00:37:18.624123 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-03-28 00:37:18.624129 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-03-28 00:37:18.624136 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-28 00:37:18.624143 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-03-28 00:37:18.624150 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-28 00:37:18.624157 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-03-28 00:37:18.624163 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-28 00:37:18.624171 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-28 00:37:18.624177 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-28 00:37:18.624184 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-03-28 00:37:18.624190 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-28 00:37:18.624196 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-28 00:37:18.624202 | orchestrator |
2026-03-28 00:37:18.624216 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-03-28 00:37:35.513952 | orchestrator | Saturday 28 March 2026 00:37:18 +0000 (0:00:01.206) 0:00:24.868 ********
2026-03-28 00:37:35.514169 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:37:35.514876 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:37:35.514897 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:37:35.514909 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:37:35.514921 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:37:35.514932 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:37:35.514943 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:37:35.514954 | orchestrator |
2026-03-28 00:37:35.514967 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-03-28 00:37:35.515001 | orchestrator | Saturday 28 March 2026 00:37:19 +0000 (0:00:00.629) 0:00:25.497 ********
2026-03-28 00:37:35.515015 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-1, testbed-node-2, testbed-node-4, testbed-node-5, testbed-node-3
2026-03-28 00:37:35.515028 | orchestrator |
2026-03-28 00:37:35.515039 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-03-28 00:37:35.515050 | orchestrator | Saturday 28 March 2026 00:37:23 +0000 (0:00:04.500) 0:00:29.998 ********
2026-03-28 00:37:35.515062 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-28 00:37:35.515082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-28 00:37:35.515101 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-28 00:37:35.515121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-28 00:37:35.515140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-28 00:37:35.515161 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-28 00:37:35.515191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-28 00:37:35.515221 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-28 00:37:35.515241 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-28 00:37:35.515260 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-28 00:37:35.515279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-28 00:37:35.515323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-28 00:37:35.515360 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-28 00:37:35.515381 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-28 00:37:35.515400 | orchestrator |
2026-03-28 00:37:35.515419 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-03-28 00:37:35.515440 | orchestrator | Saturday 28 March 2026 00:37:29 +0000 (0:00:05.868) 0:00:35.867 ********
2026-03-28 00:37:35.515460 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-28 00:37:35.515480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-28 00:37:35.515502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-28 00:37:35.515522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-28 00:37:35.515539 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-28 00:37:35.515551 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-28 00:37:35.515562 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-28 00:37:35.515573 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-28 00:37:35.515591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-28 00:37:35.515603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-28 00:37:35.515614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-28 00:37:35.515633 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-28 00:37:35.515660 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-28 00:37:41.547139 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-28 00:37:41.547251 | orchestrator |
2026-03-28 00:37:41.547267 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-03-28 00:37:41.547281 | orchestrator | Saturday 28 March 2026 00:37:35 +0000 (0:00:05.889) 0:00:41.756 ********
2026-03-28 00:37:41.547294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:37:41.547306 | orchestrator |
2026-03-28 00:37:41.547317 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-28 00:37:41.547328 | orchestrator | Saturday 28 March 2026 00:37:36 +0000 (0:00:01.139) 0:00:42.896 ********
2026-03-28 00:37:41.547339 | orchestrator | ok: [testbed-manager]
2026-03-28 00:37:41.547351 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:37:41.547361 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:37:41.547372 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:37:41.547383 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:37:41.547393 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:37:41.547404 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:37:41.547415 | orchestrator |
2026-03-28 00:37:41.547425 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-28 00:37:41.547436 | orchestrator | Saturday 28 March 2026 00:37:37 +0000 (0:00:01.109) 0:00:44.005 ********
2026-03-28 00:37:41.547447 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-28 00:37:41.547459 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-28 00:37:41.547470 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-28 00:37:41.547481 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-28 00:37:41.547491 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-28 00:37:41.547502 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-28 00:37:41.547513 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-28 00:37:41.547523 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-28 00:37:41.547534 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:37:41.547546 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-28 00:37:41.547557 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-28 00:37:41.547568 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-28 00:37:41.547578 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-28 00:37:41.547589 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:37:41.547600 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-28 00:37:41.547632 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-28 00:37:41.547644 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-28 00:37:41.547654 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-28 00:37:41.547665 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:37:41.547678 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-28 00:37:41.547705 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-28 00:37:41.547718 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-28 00:37:41.547730 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-28 00:37:41.547780 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:37:41.547793 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-28 00:37:41.547806 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-28 00:37:41.547818 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-28 00:37:41.547831 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-28 00:37:41.547843 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:37:41.547855 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:37:41.547868 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-28 00:37:41.547880 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-28 00:37:41.547892 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-28 00:37:41.547904 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-28 00:37:41.547916 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:37:41.547928 | orchestrator |
2026-03-28 00:37:41.547941 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-28 00:37:41.547969 | orchestrator | Saturday 28 March 2026 00:37:39 +0000 (0:00:02.055) 0:00:46.061 ********
2026-03-28 00:37:41.547982 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:37:41.547996 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:37:41.548008 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:37:41.548020 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:37:41.548032 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:37:41.548042 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:37:41.548053 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:37:41.548063 | orchestrator |
2026-03-28 00:37:41.548074 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-28 00:37:41.548085 | orchestrator | Saturday 28 March 2026 00:37:40 +0000 (0:00:00.628) 0:00:46.690 ********
2026-03-28 00:37:41.548095 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:37:41.548106 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:37:41.548116 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:37:41.548127 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:37:41.548138 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:37:41.548149 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:37:41.548160 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:37:41.548170 | orchestrator | 2026-03-28 00:37:41.548181 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:37:41.548192 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 00:37:41.548205 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 00:37:41.548225 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 00:37:41.548236 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 00:37:41.548247 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 00:37:41.548257 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 00:37:41.548268 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 00:37:41.548279 | orchestrator | 2026-03-28 00:37:41.548289 | orchestrator | 2026-03-28 00:37:41.548300 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:37:41.548311 | orchestrator | Saturday 28 March 2026 00:37:41 +0000 (0:00:00.715) 0:00:47.405 ******** 2026-03-28 00:37:41.548322 | orchestrator | =============================================================================== 2026-03-28 00:37:41.548332 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.89s 2026-03-28 00:37:41.548343 | orchestrator | osism.commons.network : Create systemd networkd netdev files 
------------ 5.87s 2026-03-28 00:37:41.548354 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.50s 2026-03-28 00:37:41.548364 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.55s 2026-03-28 00:37:41.548375 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.28s 2026-03-28 00:37:41.548385 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.06s 2026-03-28 00:37:41.548396 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.06s 2026-03-28 00:37:41.548406 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.81s 2026-03-28 00:37:41.548423 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.70s 2026-03-28 00:37:41.548434 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.63s 2026-03-28 00:37:41.548445 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.58s 2026-03-28 00:37:41.548455 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.23s 2026-03-28 00:37:41.548466 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.21s 2026-03-28 00:37:41.548477 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.21s 2026-03-28 00:37:41.548487 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.17s 2026-03-28 00:37:41.548498 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.14s 2026-03-28 00:37:41.548509 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.11s 2026-03-28 00:37:41.548519 | orchestrator | osism.commons.network : Create required directories --------------------- 
0.99s 2026-03-28 00:37:41.548530 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.94s 2026-03-28 00:37:41.548540 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.93s 2026-03-28 00:37:41.855513 | orchestrator | + osism apply wireguard 2026-03-28 00:37:53.899403 | orchestrator | 2026-03-28 00:37:53 | INFO  | Task 448d7141-4921-4830-9c82-f6fbb5367f55 (wireguard) was prepared for execution. 2026-03-28 00:37:53.899511 | orchestrator | 2026-03-28 00:37:53 | INFO  | It takes a moment until task 448d7141-4921-4830-9c82-f6fbb5367f55 (wireguard) has been started and output is visible here. 2026-03-28 00:38:14.250487 | orchestrator | 2026-03-28 00:38:14.250576 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-03-28 00:38:14.250614 | orchestrator | 2026-03-28 00:38:14.250626 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-03-28 00:38:14.250636 | orchestrator | Saturday 28 March 2026 00:37:58 +0000 (0:00:00.233) 0:00:00.233 ******** 2026-03-28 00:38:14.250646 | orchestrator | ok: [testbed-manager] 2026-03-28 00:38:14.250656 | orchestrator | 2026-03-28 00:38:14.250666 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-03-28 00:38:14.250676 | orchestrator | Saturday 28 March 2026 00:37:59 +0000 (0:00:01.576) 0:00:01.810 ******** 2026-03-28 00:38:14.250686 | orchestrator | changed: [testbed-manager] 2026-03-28 00:38:14.250696 | orchestrator | 2026-03-28 00:38:14.250709 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-03-28 00:38:14.250762 | orchestrator | Saturday 28 March 2026 00:38:06 +0000 (0:00:06.735) 0:00:08.545 ******** 2026-03-28 00:38:14.250773 | orchestrator | changed: [testbed-manager] 2026-03-28 00:38:14.250782 | orchestrator | 2026-03-28 00:38:14.250793 | orchestrator | 
TASK [osism.services.wireguard : Create preshared key] ************************* 2026-03-28 00:38:14.250803 | orchestrator | Saturday 28 March 2026 00:38:07 +0000 (0:00:00.553) 0:00:09.099 ******** 2026-03-28 00:38:14.250812 | orchestrator | changed: [testbed-manager] 2026-03-28 00:38:14.250825 | orchestrator | 2026-03-28 00:38:14.250842 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-03-28 00:38:14.250857 | orchestrator | Saturday 28 March 2026 00:38:07 +0000 (0:00:00.444) 0:00:09.544 ******** 2026-03-28 00:38:14.250874 | orchestrator | ok: [testbed-manager] 2026-03-28 00:38:14.250891 | orchestrator | 2026-03-28 00:38:14.250908 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-03-28 00:38:14.250922 | orchestrator | Saturday 28 March 2026 00:38:08 +0000 (0:00:00.719) 0:00:10.263 ******** 2026-03-28 00:38:14.250932 | orchestrator | ok: [testbed-manager] 2026-03-28 00:38:14.250941 | orchestrator | 2026-03-28 00:38:14.250951 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-03-28 00:38:14.250960 | orchestrator | Saturday 28 March 2026 00:38:08 +0000 (0:00:00.436) 0:00:10.700 ******** 2026-03-28 00:38:14.250970 | orchestrator | ok: [testbed-manager] 2026-03-28 00:38:14.250980 | orchestrator | 2026-03-28 00:38:14.250989 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-03-28 00:38:14.250999 | orchestrator | Saturday 28 March 2026 00:38:09 +0000 (0:00:00.432) 0:00:11.132 ******** 2026-03-28 00:38:14.251009 | orchestrator | changed: [testbed-manager] 2026-03-28 00:38:14.251018 | orchestrator | 2026-03-28 00:38:14.251028 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-03-28 00:38:14.251038 | orchestrator | Saturday 28 March 2026 00:38:10 +0000 (0:00:01.198) 0:00:12.331 ******** 2026-03-28 00:38:14.251047 | 
orchestrator | changed: [testbed-manager] => (item=None) 2026-03-28 00:38:14.251059 | orchestrator | changed: [testbed-manager] 2026-03-28 00:38:14.251070 | orchestrator | 2026-03-28 00:38:14.251081 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-03-28 00:38:14.251093 | orchestrator | Saturday 28 March 2026 00:38:11 +0000 (0:00:00.968) 0:00:13.300 ******** 2026-03-28 00:38:14.251104 | orchestrator | changed: [testbed-manager] 2026-03-28 00:38:14.251115 | orchestrator | 2026-03-28 00:38:14.251126 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-03-28 00:38:14.251138 | orchestrator | Saturday 28 March 2026 00:38:13 +0000 (0:00:01.693) 0:00:14.994 ******** 2026-03-28 00:38:14.251149 | orchestrator | changed: [testbed-manager] 2026-03-28 00:38:14.251160 | orchestrator | 2026-03-28 00:38:14.251171 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:38:14.251183 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:38:14.251195 | orchestrator | 2026-03-28 00:38:14.251205 | orchestrator | 2026-03-28 00:38:14.251215 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:38:14.251224 | orchestrator | Saturday 28 March 2026 00:38:13 +0000 (0:00:00.928) 0:00:15.922 ******** 2026-03-28 00:38:14.251242 | orchestrator | =============================================================================== 2026-03-28 00:38:14.251252 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.74s 2026-03-28 00:38:14.251262 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.69s 2026-03-28 00:38:14.251271 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.58s 2026-03-28 00:38:14.251281 | 
orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.20s 2026-03-28 00:38:14.251291 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.97s 2026-03-28 00:38:14.251300 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.93s 2026-03-28 00:38:14.251310 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.72s 2026-03-28 00:38:14.251320 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s 2026-03-28 00:38:14.251329 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.44s 2026-03-28 00:38:14.251339 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.44s 2026-03-28 00:38:14.251349 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s 2026-03-28 00:38:14.480211 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-03-28 00:38:14.517195 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-03-28 00:38:14.517274 | orchestrator | Dload Upload Total Spent Left Speed 2026-03-28 00:38:14.593066 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 196 0 --:--:-- --:--:-- --:--:-- 200 2026-03-28 00:38:14.609165 | orchestrator | + osism apply --environment custom workarounds 2026-03-28 00:38:16.336151 | orchestrator | 2026-03-28 00:38:16 | INFO  | Trying to run play workarounds in environment custom 2026-03-28 00:38:26.551543 | orchestrator | 2026-03-28 00:38:26 | INFO  | Task d7d96603-d7c9-4bd9-ae39-a809e28879b2 (workarounds) was prepared for execution. 2026-03-28 00:38:26.551679 | orchestrator | 2026-03-28 00:38:26 | INFO  | It takes a moment until task d7d96603-d7c9-4bd9-ae39-a809e28879b2 (workarounds) has been started and output is visible here. 
2026-03-28 00:38:53.033305 | orchestrator | 2026-03-28 00:38:53.033416 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 00:38:53.033445 | orchestrator | 2026-03-28 00:38:53.033466 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-03-28 00:38:53.033487 | orchestrator | Saturday 28 March 2026 00:38:30 +0000 (0:00:00.132) 0:00:00.132 ******** 2026-03-28 00:38:53.033506 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-03-28 00:38:53.033526 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-03-28 00:38:53.033544 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-03-28 00:38:53.033563 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-03-28 00:38:53.033582 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-03-28 00:38:53.033601 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-03-28 00:38:53.033619 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-03-28 00:38:53.033638 | orchestrator | 2026-03-28 00:38:53.033657 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-03-28 00:38:53.033675 | orchestrator | 2026-03-28 00:38:53.033741 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-03-28 00:38:53.033755 | orchestrator | Saturday 28 March 2026 00:38:31 +0000 (0:00:00.828) 0:00:00.961 ******** 2026-03-28 00:38:53.033766 | orchestrator | ok: [testbed-manager] 2026-03-28 00:38:53.033778 | orchestrator | 2026-03-28 00:38:53.033812 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-03-28 00:38:53.033824 | orchestrator | 2026-03-28 00:38:53.033835 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-03-28 00:38:53.033850 | orchestrator | Saturday 28 March 2026 00:38:33 +0000 (0:00:02.421) 0:00:03.383 ******** 2026-03-28 00:38:53.033862 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:38:53.033876 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:38:53.033889 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:38:53.033902 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:38:53.033915 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:38:53.033927 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:38:53.033939 | orchestrator | 2026-03-28 00:38:53.033953 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-03-28 00:38:53.033966 | orchestrator | 2026-03-28 00:38:53.033978 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-03-28 00:38:53.033992 | orchestrator | Saturday 28 March 2026 00:38:35 +0000 (0:00:01.857) 0:00:05.241 ******** 2026-03-28 00:38:53.034005 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-28 00:38:53.034067 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-28 00:38:53.034079 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-28 00:38:53.034090 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-28 00:38:53.034101 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-28 00:38:53.034125 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-28 00:38:53.034136 | orchestrator | 2026-03-28 00:38:53.034147 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-03-28 00:38:53.034158 | orchestrator | Saturday 28 March 2026 00:38:37 +0000 (0:00:01.598) 0:00:06.840 ******** 2026-03-28 00:38:53.034169 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:38:53.034181 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:38:53.034193 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:38:53.034213 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:38:53.034232 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:38:53.034250 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:38:53.034268 | orchestrator | 2026-03-28 00:38:53.034285 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-03-28 00:38:53.034302 | orchestrator | Saturday 28 March 2026 00:38:41 +0000 (0:00:03.716) 0:00:10.556 ******** 2026-03-28 00:38:53.034320 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:38:53.034339 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:38:53.034358 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:38:53.034378 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:38:53.034397 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:38:53.034415 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:38:53.034431 | orchestrator | 2026-03-28 00:38:53.034442 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-03-28 00:38:53.034453 | orchestrator | 2026-03-28 00:38:53.034465 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-03-28 00:38:53.034476 | orchestrator | Saturday 28 March 2026 00:38:42 +0000 (0:00:00.876) 0:00:11.433 ******** 2026-03-28 00:38:53.034487 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:38:53.034498 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:38:53.034508 | orchestrator | changed: [testbed-node-2] 2026-03-28 
00:38:53.034519 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:38:53.034530 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:38:53.034541 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:38:53.034552 | orchestrator | changed: [testbed-manager] 2026-03-28 00:38:53.034574 | orchestrator | 2026-03-28 00:38:53.034586 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-03-28 00:38:53.034596 | orchestrator | Saturday 28 March 2026 00:38:43 +0000 (0:00:01.691) 0:00:13.125 ******** 2026-03-28 00:38:53.034607 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:38:53.034619 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:38:53.034629 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:38:53.034641 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:38:53.034651 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:38:53.034662 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:38:53.034737 | orchestrator | changed: [testbed-manager] 2026-03-28 00:38:53.034750 | orchestrator | 2026-03-28 00:38:53.034761 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-03-28 00:38:53.034772 | orchestrator | Saturday 28 March 2026 00:38:45 +0000 (0:00:01.740) 0:00:14.865 ******** 2026-03-28 00:38:53.034783 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:38:53.034794 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:38:53.034805 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:38:53.034816 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:38:53.034827 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:38:53.034838 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:38:53.034849 | orchestrator | ok: [testbed-manager] 2026-03-28 00:38:53.034860 | orchestrator | 2026-03-28 00:38:53.034870 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-03-28 00:38:53.034881 | orchestrator 
| Saturday 28 March 2026 00:38:47 +0000 (0:00:01.647) 0:00:16.512 ******** 2026-03-28 00:38:53.034892 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:38:53.034903 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:38:53.034914 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:38:53.034925 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:38:53.034936 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:38:53.034947 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:38:53.034958 | orchestrator | changed: [testbed-manager] 2026-03-28 00:38:53.034968 | orchestrator | 2026-03-28 00:38:53.034979 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-03-28 00:38:53.034990 | orchestrator | Saturday 28 March 2026 00:38:48 +0000 (0:00:01.856) 0:00:18.369 ******** 2026-03-28 00:38:53.035001 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:38:53.035012 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:38:53.035023 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:38:53.035033 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:38:53.035044 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:38:53.035055 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:38:53.035066 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:38:53.035076 | orchestrator | 2026-03-28 00:38:53.035087 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-03-28 00:38:53.035098 | orchestrator | 2026-03-28 00:38:53.035109 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-03-28 00:38:53.035120 | orchestrator | Saturday 28 March 2026 00:38:49 +0000 (0:00:00.678) 0:00:19.048 ******** 2026-03-28 00:38:53.035131 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:38:53.035142 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:38:53.035152 | orchestrator | ok: [testbed-node-0] 
2026-03-28 00:38:53.035163 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:38:53.035174 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:38:53.035184 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:38:53.035195 | orchestrator | ok: [testbed-manager] 2026-03-28 00:38:53.035206 | orchestrator | 2026-03-28 00:38:53.035217 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:38:53.035229 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:38:53.035240 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:38:53.035258 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:38:53.035276 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:38:53.035287 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:38:53.035298 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:38:53.035309 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:38:53.035319 | orchestrator | 2026-03-28 00:38:53.035330 | orchestrator | 2026-03-28 00:38:53.035341 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:38:53.035352 | orchestrator | Saturday 28 March 2026 00:38:53 +0000 (0:00:03.358) 0:00:22.406 ******** 2026-03-28 00:38:53.035363 | orchestrator | =============================================================================== 2026-03-28 00:38:53.035374 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.72s 2026-03-28 00:38:53.035384 | orchestrator | Install python3-docker 
-------------------------------------------------- 3.36s 2026-03-28 00:38:53.035395 | orchestrator | Apply netplan configuration --------------------------------------------- 2.42s 2026-03-28 00:38:53.035406 | orchestrator | Apply netplan configuration --------------------------------------------- 1.86s 2026-03-28 00:38:53.035417 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.86s 2026-03-28 00:38:53.035427 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.74s 2026-03-28 00:38:53.035438 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.69s 2026-03-28 00:38:53.035449 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.65s 2026-03-28 00:38:53.035460 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.60s 2026-03-28 00:38:53.035470 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.88s 2026-03-28 00:38:53.035481 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.83s 2026-03-28 00:38:53.035499 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.68s 2026-03-28 00:38:53.481851 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-03-28 00:39:05.039483 | orchestrator | 2026-03-28 00:39:05 | INFO  | Task c0bb5800-d1eb-4a9b-88c7-772d8457e7ce (reboot) was prepared for execution. 2026-03-28 00:39:05.039581 | orchestrator | 2026-03-28 00:39:05 | INFO  | It takes a moment until task c0bb5800-d1eb-4a9b-88c7-772d8457e7ce (reboot) has been started and output is visible here. 
2026-03-28 00:39:15.040730 | orchestrator | 2026-03-28 00:39:15.040837 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-28 00:39:15.040854 | orchestrator | 2026-03-28 00:39:15.040866 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-28 00:39:15.040878 | orchestrator | Saturday 28 March 2026 00:39:09 +0000 (0:00:00.203) 0:00:00.203 ******** 2026-03-28 00:39:15.040890 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:39:15.040902 | orchestrator | 2026-03-28 00:39:15.040913 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-28 00:39:15.040924 | orchestrator | Saturday 28 March 2026 00:39:09 +0000 (0:00:00.109) 0:00:00.313 ******** 2026-03-28 00:39:15.040936 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:39:15.040946 | orchestrator | 2026-03-28 00:39:15.040958 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-28 00:39:15.040993 | orchestrator | Saturday 28 March 2026 00:39:10 +0000 (0:00:00.893) 0:00:01.207 ******** 2026-03-28 00:39:15.041005 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:39:15.041016 | orchestrator | 2026-03-28 00:39:15.041027 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-28 00:39:15.041038 | orchestrator | 2026-03-28 00:39:15.041049 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-28 00:39:15.041060 | orchestrator | Saturday 28 March 2026 00:39:10 +0000 (0:00:00.120) 0:00:01.327 ******** 2026-03-28 00:39:15.041071 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:39:15.041082 | orchestrator | 2026-03-28 00:39:15.041093 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-28 00:39:15.041104 | orchestrator | Saturday 28 March 
2026 00:39:10 +0000 (0:00:00.106) 0:00:01.433 ******** 2026-03-28 00:39:15.041115 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:39:15.041126 | orchestrator | 2026-03-28 00:39:15.041137 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-28 00:39:15.041148 | orchestrator | Saturday 28 March 2026 00:39:11 +0000 (0:00:00.642) 0:00:02.076 ******** 2026-03-28 00:39:15.041159 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:39:15.041170 | orchestrator | 2026-03-28 00:39:15.041181 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-28 00:39:15.041192 | orchestrator | 2026-03-28 00:39:15.041203 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-28 00:39:15.041214 | orchestrator | Saturday 28 March 2026 00:39:11 +0000 (0:00:00.111) 0:00:02.188 ******** 2026-03-28 00:39:15.041225 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:39:15.041239 | orchestrator | 2026-03-28 00:39:15.041251 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-28 00:39:15.041264 | orchestrator | Saturday 28 March 2026 00:39:11 +0000 (0:00:00.240) 0:00:02.428 ******** 2026-03-28 00:39:15.041277 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:39:15.041290 | orchestrator | 2026-03-28 00:39:15.041302 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-28 00:39:15.041329 | orchestrator | Saturday 28 March 2026 00:39:12 +0000 (0:00:00.603) 0:00:03.032 ******** 2026-03-28 00:39:15.041343 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:39:15.041355 | orchestrator | 2026-03-28 00:39:15.041368 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-28 00:39:15.041381 | orchestrator | 2026-03-28 00:39:15.041393 | orchestrator | TASK [Exit playbook, 
if user did not mean to reboot systems] ******************* 2026-03-28 00:39:15.041406 | orchestrator | Saturday 28 March 2026 00:39:12 +0000 (0:00:00.117) 0:00:03.149 ******** 2026-03-28 00:39:15.041419 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:39:15.041431 | orchestrator | 2026-03-28 00:39:15.041444 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-28 00:39:15.041457 | orchestrator | Saturday 28 March 2026 00:39:12 +0000 (0:00:00.107) 0:00:03.257 ******** 2026-03-28 00:39:15.041470 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:39:15.041483 | orchestrator | 2026-03-28 00:39:15.041494 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-28 00:39:15.041505 | orchestrator | Saturday 28 March 2026 00:39:12 +0000 (0:00:00.659) 0:00:03.916 ******** 2026-03-28 00:39:15.041516 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:39:15.041527 | orchestrator | 2026-03-28 00:39:15.041537 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-28 00:39:15.041548 | orchestrator | 2026-03-28 00:39:15.041559 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-28 00:39:15.041570 | orchestrator | Saturday 28 March 2026 00:39:13 +0000 (0:00:00.129) 0:00:04.045 ******** 2026-03-28 00:39:15.041581 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:39:15.041591 | orchestrator | 2026-03-28 00:39:15.041602 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-28 00:39:15.041613 | orchestrator | Saturday 28 March 2026 00:39:13 +0000 (0:00:00.104) 0:00:04.150 ******** 2026-03-28 00:39:15.041633 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:39:15.041644 | orchestrator | 2026-03-28 00:39:15.041655 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-03-28 00:39:15.041666 | orchestrator | Saturday 28 March 2026 00:39:13 +0000 (0:00:00.624) 0:00:04.775 ******** 2026-03-28 00:39:15.041695 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:39:15.041707 | orchestrator | 2026-03-28 00:39:15.041719 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-28 00:39:15.041730 | orchestrator | 2026-03-28 00:39:15.041741 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-28 00:39:15.041751 | orchestrator | Saturday 28 March 2026 00:39:13 +0000 (0:00:00.122) 0:00:04.897 ******** 2026-03-28 00:39:15.041762 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:39:15.041773 | orchestrator | 2026-03-28 00:39:15.041783 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-28 00:39:15.041794 | orchestrator | Saturday 28 March 2026 00:39:14 +0000 (0:00:00.115) 0:00:05.012 ******** 2026-03-28 00:39:15.041805 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:39:15.041816 | orchestrator | 2026-03-28 00:39:15.041827 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-28 00:39:15.041838 | orchestrator | Saturday 28 March 2026 00:39:14 +0000 (0:00:00.654) 0:00:05.666 ******** 2026-03-28 00:39:15.041864 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:39:15.041876 | orchestrator | 2026-03-28 00:39:15.041887 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:39:15.041899 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:39:15.041911 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:39:15.041922 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-03-28 00:39:15.041933 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:39:15.041944 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:39:15.041955 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:39:15.041966 | orchestrator | 2026-03-28 00:39:15.041977 | orchestrator | 2026-03-28 00:39:15.041988 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:39:15.041999 | orchestrator | Saturday 28 March 2026 00:39:14 +0000 (0:00:00.040) 0:00:05.707 ******** 2026-03-28 00:39:15.042010 | orchestrator | =============================================================================== 2026-03-28 00:39:15.042083 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.08s 2026-03-28 00:39:15.042095 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.78s 2026-03-28 00:39:15.042106 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.64s 2026-03-28 00:39:15.355491 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-28 00:39:27.535083 | orchestrator | 2026-03-28 00:39:27 | INFO  | Task dae79063-21a9-40a8-87dd-d2f6d71eb729 (wait-for-connection) was prepared for execution. 2026-03-28 00:39:27.535222 | orchestrator | 2026-03-28 00:39:27 | INFO  | It takes a moment until task dae79063-21a9-40a8-87dd-d2f6d71eb729 (wait-for-connection) has been started and output is visible here. 
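The `osism apply wait-for-connection` step here is the second half of a reboot-then-poll pattern: the reboot plays deliberately do not wait for the nodes to come back, and a separate play then polls each node until SSH answers again. A minimal shell analogue of that deadline-based poll (the helper name `wait_until` and the 1-second poll interval are illustrative, not taken from the testbed scripts, which use Ansible's `wait_for_connection` module):

```shell
#!/bin/bash
# wait_until TIMEOUT_SECONDS CMD...: poll CMD until it succeeds or the
# deadline passes. Returns 0 on success, 1 on timeout. (Illustrative
# analogue of Ansible's wait_for_connection; not from the testbed repo.)
wait_until() {
    local deadline=$(( $(date +%s) + $1 ))
    shift
    until "$@"; do
        (( $(date +%s) >= deadline )) && return 1
        sleep 1
    done
}

# Example: wait up to 300s for a node to answer on the SSH port.
# wait_until 300 nc -z testbed-node-0 22
```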
2026-03-28 00:39:43.739038 | orchestrator | 2026-03-28 00:39:43.739149 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-28 00:39:43.739194 | orchestrator | 2026-03-28 00:39:43.739207 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-28 00:39:43.739218 | orchestrator | Saturday 28 March 2026 00:39:31 +0000 (0:00:00.243) 0:00:00.243 ******** 2026-03-28 00:39:43.739229 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:39:43.739242 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:39:43.739253 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:39:43.739263 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:39:43.739274 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:39:43.739285 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:39:43.739296 | orchestrator | 2026-03-28 00:39:43.739307 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:39:43.739319 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:39:43.739332 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:39:43.739343 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:39:43.739354 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:39:43.739365 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:39:43.739376 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:39:43.739386 | orchestrator | 2026-03-28 00:39:43.739398 | orchestrator | 2026-03-28 00:39:43.739410 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-28 00:39:43.739421 | orchestrator | Saturday 28 March 2026 00:39:43 +0000 (0:00:11.551) 0:00:11.794 ******** 2026-03-28 00:39:43.739432 | orchestrator | =============================================================================== 2026-03-28 00:39:43.739442 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.55s 2026-03-28 00:39:44.054081 | orchestrator | + osism apply hddtemp 2026-03-28 00:39:56.297499 | orchestrator | 2026-03-28 00:39:56 | INFO  | Task 4f868f88-9d77-4d9f-891a-01e392d69100 (hddtemp) was prepared for execution. 2026-03-28 00:39:56.297573 | orchestrator | 2026-03-28 00:39:56 | INFO  | It takes a moment until task 4f868f88-9d77-4d9f-891a-01e392d69100 (hddtemp) has been started and output is visible here. 2026-03-28 00:40:24.439825 | orchestrator | 2026-03-28 00:40:24.439935 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-28 00:40:24.439952 | orchestrator | 2026-03-28 00:40:24.439965 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-28 00:40:24.439977 | orchestrator | Saturday 28 March 2026 00:40:00 +0000 (0:00:00.256) 0:00:00.256 ******** 2026-03-28 00:40:24.439988 | orchestrator | ok: [testbed-manager] 2026-03-28 00:40:24.440001 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:40:24.440012 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:40:24.440023 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:40:24.440034 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:40:24.440045 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:40:24.440056 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:40:24.440067 | orchestrator | 2026-03-28 00:40:24.440079 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-03-28 00:40:24.440089 | orchestrator | Saturday 28 March 2026 
00:40:01 +0000 (0:00:00.734) 0:00:00.990 ******** 2026-03-28 00:40:24.440103 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:40:24.440141 | orchestrator | 2026-03-28 00:40:24.440154 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-28 00:40:24.440165 | orchestrator | Saturday 28 March 2026 00:40:02 +0000 (0:00:01.237) 0:00:02.228 ******** 2026-03-28 00:40:24.440176 | orchestrator | ok: [testbed-manager] 2026-03-28 00:40:24.440186 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:40:24.440197 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:40:24.440208 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:40:24.440219 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:40:24.440230 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:40:24.440242 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:40:24.440253 | orchestrator | 2026-03-28 00:40:24.440263 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-28 00:40:24.440274 | orchestrator | Saturday 28 March 2026 00:40:04 +0000 (0:00:01.882) 0:00:04.111 ******** 2026-03-28 00:40:24.440285 | orchestrator | changed: [testbed-manager] 2026-03-28 00:40:24.440297 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:40:24.440308 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:40:24.440319 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:40:24.440330 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:40:24.440341 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:40:24.440352 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:40:24.440362 | orchestrator | 2026-03-28 00:40:24.440373 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-03-28 00:40:24.440384 | orchestrator | Saturday 28 March 2026 00:40:05 +0000 (0:00:01.183) 0:00:05.294 ******** 2026-03-28 00:40:24.440395 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:40:24.440406 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:40:24.440417 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:40:24.440427 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:40:24.440438 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:40:24.440449 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:40:24.440474 | orchestrator | ok: [testbed-manager] 2026-03-28 00:40:24.440486 | orchestrator | 2026-03-28 00:40:24.440497 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-28 00:40:24.440508 | orchestrator | Saturday 28 March 2026 00:40:06 +0000 (0:00:01.214) 0:00:06.509 ******** 2026-03-28 00:40:24.440519 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:40:24.440530 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:40:24.440541 | orchestrator | changed: [testbed-manager] 2026-03-28 00:40:24.440552 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:40:24.440562 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:40:24.440573 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:40:24.440584 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:40:24.440595 | orchestrator | 2026-03-28 00:40:24.440606 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-28 00:40:24.440616 | orchestrator | Saturday 28 March 2026 00:40:07 +0000 (0:00:00.866) 0:00:07.375 ******** 2026-03-28 00:40:24.440627 | orchestrator | changed: [testbed-manager] 2026-03-28 00:40:24.440659 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:40:24.440670 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:40:24.440681 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:40:24.440692 | orchestrator | changed: 
[testbed-node-5] 2026-03-28 00:40:24.440703 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:40:24.440713 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:40:24.440724 | orchestrator | 2026-03-28 00:40:24.440735 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-28 00:40:24.440746 | orchestrator | Saturday 28 March 2026 00:40:20 +0000 (0:00:13.000) 0:00:20.375 ******** 2026-03-28 00:40:24.440758 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:40:24.440769 | orchestrator | 2026-03-28 00:40:24.440789 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-28 00:40:24.440800 | orchestrator | Saturday 28 March 2026 00:40:22 +0000 (0:00:01.292) 0:00:21.667 ******** 2026-03-28 00:40:24.440811 | orchestrator | changed: [testbed-manager] 2026-03-28 00:40:24.440822 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:40:24.440833 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:40:24.440844 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:40:24.440855 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:40:24.440866 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:40:24.440877 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:40:24.440888 | orchestrator | 2026-03-28 00:40:24.440899 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:40:24.440910 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:40:24.440939 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:40:24.440951 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:40:24.440963 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:40:24.440974 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:40:24.440985 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:40:24.440996 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:40:24.441007 | orchestrator | 2026-03-28 00:40:24.441018 | orchestrator | 2026-03-28 00:40:24.441029 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:40:24.441040 | orchestrator | Saturday 28 March 2026 00:40:24 +0000 (0:00:01.991) 0:00:23.658 ******** 2026-03-28 00:40:24.441051 | orchestrator | =============================================================================== 2026-03-28 00:40:24.441062 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.00s 2026-03-28 00:40:24.441073 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.99s 2026-03-28 00:40:24.441084 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.88s 2026-03-28 00:40:24.441094 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.29s 2026-03-28 00:40:24.441105 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.24s 2026-03-28 00:40:24.441116 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.21s 2026-03-28 00:40:24.441127 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.18s 2026-03-28 00:40:24.441138 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.87s 2026-03-28 00:40:24.441149 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.73s 2026-03-28 00:40:24.751150 | orchestrator | ++ semver 9.5.0 7.1.1 2026-03-28 00:40:24.796824 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-28 00:40:24.796918 | orchestrator | + sudo systemctl restart manager.service 2026-03-28 00:40:38.596376 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-28 00:40:38.596499 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-28 00:40:38.596517 | orchestrator | + local max_attempts=60 2026-03-28 00:40:38.596549 | orchestrator | + local name=ceph-ansible 2026-03-28 00:40:38.596561 | orchestrator | + local attempt_num=1 2026-03-28 00:40:38.596573 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:40:38.630606 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:40:38.630761 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:40:38.630788 | orchestrator | + sleep 5 2026-03-28 00:40:43.637135 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:40:43.669592 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:40:43.669744 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:40:43.669760 | orchestrator | + sleep 5 2026-03-28 00:40:48.672929 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:40:48.712446 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:40:48.712527 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:40:48.712541 | orchestrator | + sleep 5 2026-03-28 00:40:53.716456 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:40:53.763152 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:40:53.763269 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-03-28 00:40:53.763293 | orchestrator | + sleep 5 2026-03-28 00:40:58.769324 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:40:58.818981 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:40:58.819082 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:40:58.819099 | orchestrator | + sleep 5 2026-03-28 00:41:03.823820 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:41:03.864407 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:41:03.864490 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:41:03.864501 | orchestrator | + sleep 5 2026-03-28 00:41:08.870860 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:41:08.914272 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:41:08.914379 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:41:08.914389 | orchestrator | + sleep 5 2026-03-28 00:41:13.918281 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:41:14.002888 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-28 00:41:14.002975 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:41:14.003202 | orchestrator | + sleep 5 2026-03-28 00:41:19.006132 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:41:19.047434 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-28 00:41:19.047526 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:41:19.047541 | orchestrator | + sleep 5 2026-03-28 00:41:24.050866 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:41:24.088254 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-28 00:41:24.088337 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-03-28 00:41:24.088347 | orchestrator | + sleep 5 2026-03-28 00:41:29.093797 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:41:29.135847 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-28 00:41:29.135938 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:41:29.135953 | orchestrator | + sleep 5 2026-03-28 00:41:34.141130 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:41:34.184824 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-28 00:41:34.184913 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:41:34.184924 | orchestrator | + sleep 5 2026-03-28 00:41:39.189165 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:41:39.232274 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-28 00:41:39.232388 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:41:39.232404 | orchestrator | + sleep 5 2026-03-28 00:41:44.238096 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:41:44.283960 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:41:44.284182 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-28 00:41:44.284217 | orchestrator | + local max_attempts=60 2026-03-28 00:41:44.284237 | orchestrator | + local name=kolla-ansible 2026-03-28 00:41:44.284253 | orchestrator | + local attempt_num=1 2026-03-28 00:41:44.284278 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-28 00:41:44.323466 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:41:44.323547 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-28 00:41:44.323559 | orchestrator | + local max_attempts=60 2026-03-28 00:41:44.323621 | orchestrator | + local name=osism-ansible 2026-03-28 00:41:44.323632 | 
orchestrator | + local attempt_num=1 2026-03-28 00:41:44.323651 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-28 00:41:44.368260 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:41:44.368353 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-28 00:41:44.368367 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-28 00:41:44.555333 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-28 00:41:44.735180 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-28 00:41:44.909067 | orchestrator | ARA in osism-ansible already disabled. 2026-03-28 00:41:45.059536 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-28 00:41:45.060647 | orchestrator | + osism apply gather-facts 2026-03-28 00:41:57.294100 | orchestrator | 2026-03-28 00:41:57 | INFO  | Task 826601fb-b98d-439e-9113-5ae43768fc98 (gather-facts) was prepared for execution. 2026-03-28 00:41:57.294185 | orchestrator | 2026-03-28 00:41:57 | INFO  | It takes a moment until task 826601fb-b98d-439e-9113-5ae43768fc98 (gather-facts) has been started and output is visible here. 
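The xtrace output above lets the `wait_for_container_healthy` helper be reconstructed almost verbatim: it polls `docker inspect` for the container's health status and gives up after `max_attempts` tries, sleeping five seconds between polls. A sketch under that assumption (the real function lives in the testbed configuration scripts and may differ in details; `docker` is invoked by name here rather than `/usr/bin/docker` so it can be stubbed in tests):

```shell
#!/bin/bash
# Reconstructed from the xtrace output above; an approximation, not the
# exact testbed implementation.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Health passes through "starting" before "healthy", as visible for
    # ceph-ansible in the log (unhealthy -> starting -> healthy).
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name never became healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```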
2026-03-28 00:42:10.635147 | orchestrator | 2026-03-28 00:42:10.635284 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-28 00:42:10.635315 | orchestrator | 2026-03-28 00:42:10.635337 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-28 00:42:10.635357 | orchestrator | Saturday 28 March 2026 00:42:01 +0000 (0:00:00.227) 0:00:00.228 ******** 2026-03-28 00:42:10.635377 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:42:10.635400 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:42:10.635421 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:42:10.635442 | orchestrator | ok: [testbed-manager] 2026-03-28 00:42:10.635462 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:42:10.635482 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:42:10.635501 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:42:10.635522 | orchestrator | 2026-03-28 00:42:10.635541 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-28 00:42:10.635561 | orchestrator | 2026-03-28 00:42:10.635616 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-28 00:42:10.635636 | orchestrator | Saturday 28 March 2026 00:42:09 +0000 (0:00:08.179) 0:00:08.407 ******** 2026-03-28 00:42:10.635657 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:42:10.635678 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:42:10.635699 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:42:10.635720 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:42:10.635900 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:42:10.635920 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:10.635939 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:42:10.635959 | orchestrator | 2026-03-28 00:42:10.635980 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-28 00:42:10.636001 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:42:10.636023 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:42:10.636043 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:42:10.636063 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:42:10.636083 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:42:10.636103 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:42:10.636122 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:42:10.636181 | orchestrator | 2026-03-28 00:42:10.636202 | orchestrator | 2026-03-28 00:42:10.636222 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:42:10.636242 | orchestrator | Saturday 28 March 2026 00:42:10 +0000 (0:00:00.598) 0:00:09.005 ******** 2026-03-28 00:42:10.636262 | orchestrator | =============================================================================== 2026-03-28 00:42:10.636281 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.18s 2026-03-28 00:42:10.636299 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.60s 2026-03-28 00:42:11.035694 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-28 00:42:11.047359 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-28 
00:42:11.060557 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-28 00:42:11.073113 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-28 00:42:11.090099 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-28 00:42:11.110528 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-28 00:42:11.129770 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-28 00:42:11.146485 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-28 00:42:11.168443 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-28 00:42:11.187788 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-28 00:42:11.208196 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-28 00:42:11.232197 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-28 00:42:11.250518 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-28 00:42:11.268700 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-28 00:42:11.289219 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-28 00:42:11.306670 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-28 00:42:11.323518 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-28 00:42:11.342324 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-28 00:42:11.363741 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-28 00:42:11.377629 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-28 00:42:11.396492 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-28 00:42:11.421828 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-28 00:42:11.442763 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-28 00:42:11.466991 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-28 00:42:11.602752 | orchestrator | ok: Runtime: 0:24:28.724152 2026-03-28 00:42:11.719869 | 2026-03-28 00:42:11.720018 | TASK [Deploy services] 2026-03-28 00:42:12.255051 | orchestrator | skipping: Conditional result was False 2026-03-28 00:42:12.274696 | 2026-03-28 00:42:12.274933 | TASK [Deploy in a nutshell] 2026-03-28 00:42:13.011017 | orchestrator | 2026-03-28 00:42:13.011158 | orchestrator | # PULL IMAGES 2026-03-28 00:42:13.011168 | orchestrator | 2026-03-28 00:42:13.011173 | orchestrator | + set -e 2026-03-28 00:42:13.011181 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 00:42:13.011190 | orchestrator | ++ export INTERACTIVE=false 2026-03-28 00:42:13.011197 | orchestrator | ++ INTERACTIVE=false 2026-03-28 00:42:13.011232 | 
orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 00:42:13.011243 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 00:42:13.011249 | orchestrator | + source /opt/manager-vars.sh 2026-03-28 00:42:13.011254 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-28 00:42:13.011262 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-28 00:42:13.011266 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-28 00:42:13.011274 | orchestrator | ++ CEPH_VERSION=reef 2026-03-28 00:42:13.011279 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-28 00:42:13.011287 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-28 00:42:13.011291 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-28 00:42:13.011299 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-28 00:42:13.011303 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-28 00:42:13.011309 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-28 00:42:13.011313 | orchestrator | ++ export ARA=false 2026-03-28 00:42:13.011317 | orchestrator | ++ ARA=false 2026-03-28 00:42:13.011322 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-28 00:42:13.011326 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-28 00:42:13.011331 | orchestrator | ++ export TEMPEST=true 2026-03-28 00:42:13.011335 | orchestrator | ++ TEMPEST=true 2026-03-28 00:42:13.011339 | orchestrator | ++ export IS_ZUUL=true 2026-03-28 00:42:13.011344 | orchestrator | ++ IS_ZUUL=true 2026-03-28 00:42:13.011348 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.253 2026-03-28 00:42:13.011353 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.253 2026-03-28 00:42:13.011357 | orchestrator | ++ export EXTERNAL_API=false 2026-03-28 00:42:13.011361 | orchestrator | ++ EXTERNAL_API=false 2026-03-28 00:42:13.011365 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-28 00:42:13.011370 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-28 00:42:13.011374 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-28 00:42:13.011379 | 
orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-28 00:42:13.011383 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-28 00:42:13.011392 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-28 00:42:13.011396 | orchestrator | + echo 2026-03-28 00:42:13.011401 | orchestrator | + echo '# PULL IMAGES' 2026-03-28 00:42:13.011405 | orchestrator | + echo 2026-03-28 00:42:13.011418 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-28 00:42:13.067135 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-28 00:42:13.067250 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-28 00:42:15.059881 | orchestrator | 2026-03-28 00:42:15 | INFO  | Trying to run play pull-images in environment custom 2026-03-28 00:42:25.272439 | orchestrator | 2026-03-28 00:42:25 | INFO  | Task aa6a193e-540f-432e-a704-630ec152dbe7 (pull-images) was prepared for execution. 2026-03-28 00:42:25.272551 | orchestrator | 2026-03-28 00:42:25 | INFO  | Task aa6a193e-540f-432e-a704-630ec152dbe7 is running in background. No more output. Check ARA for logs. 2026-03-28 00:42:27.282409 | orchestrator | 2026-03-28 00:42:27 | INFO  | Trying to run play wipe-partitions in environment custom 2026-03-28 00:42:37.430179 | orchestrator | 2026-03-28 00:42:37 | INFO  | Task d04c7c44-1848-427f-a5fd-a7362f984ace (wipe-partitions) was prepared for execution. 2026-03-28 00:42:37.430259 | orchestrator | 2026-03-28 00:42:37 | INFO  | It takes a moment until task d04c7c44-1848-427f-a5fd-a7362f984ace (wipe-partitions) has been started and output is visible here. 
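Both version gates in the trace (`semver 9.5.0 7.1.1` before the manager restart and `semver 9.5.0 7.0.0` before `pull-images`) follow the same pattern: compare the deployed MANAGER_VERSION against a minimum and only take the branch when the comparator result is `-ge 0`. A portable sketch of such a comparator, assuming the testbed's `semver` helper prints -1/0/1 like a three-way compare (the real helper may be implemented differently):

```shell
#!/bin/bash
# semver_cmp A B: print 1 if A > B, 0 if equal, -1 if A < B.
# Illustrative stand-in for the `semver` helper seen in the trace; relies
# on GNU sort's version ordering (-V).
semver_cmp() {
    if [[ "$1" == "$2" ]]; then
        echo 0
        return
    fi
    local lowest
    lowest=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
    # If B sorts lowest, A is the newer version.
    if [[ "$lowest" == "$2" ]]; then
        echo 1
    else
        echo -1
    fi
}

# Gate pattern from the trace: run the play only on manager >= 7.0.0.
# [[ $(semver_cmp "$MANAGER_VERSION" 7.0.0) -ge 0 ]] && osism apply --no-wait -r 2 -e custom pull-images
```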
2026-03-28 00:42:49.714481 | orchestrator |
2026-03-28 00:42:49.714657 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-03-28 00:42:49.714678 | orchestrator |
2026-03-28 00:42:49.714691 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-03-28 00:42:49.714711 | orchestrator | Saturday 28 March 2026 00:42:41 +0000 (0:00:00.147) 0:00:00.147 ********
2026-03-28 00:42:49.714722 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:42:49.714734 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:42:49.714746 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:42:49.714758 | orchestrator |
2026-03-28 00:42:49.714769 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-03-28 00:42:49.714810 | orchestrator | Saturday 28 March 2026 00:42:42 +0000 (0:00:00.590) 0:00:00.737 ********
2026-03-28 00:42:49.714822 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:49.714833 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:42:49.714844 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:42:49.714859 | orchestrator |
2026-03-28 00:42:49.714871 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-03-28 00:42:49.714882 | orchestrator | Saturday 28 March 2026 00:42:42 +0000 (0:00:00.385) 0:00:01.123 ********
2026-03-28 00:42:49.714893 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:42:49.714904 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:42:49.714915 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:42:49.714926 | orchestrator |
2026-03-28 00:42:49.714937 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-03-28 00:42:49.714948 | orchestrator | Saturday 28 March 2026 00:42:43 +0000 (0:00:00.644) 0:00:01.767 ********
2026-03-28 00:42:49.714959 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:49.714970 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:42:49.714981 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:42:49.714991 | orchestrator |
2026-03-28 00:42:49.715002 | orchestrator | TASK [Check device availability] ***********************************************
2026-03-28 00:42:49.715013 | orchestrator | Saturday 28 March 2026 00:42:43 +0000 (0:00:00.245) 0:00:02.012 ********
2026-03-28 00:42:49.715025 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-28 00:42:49.715039 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-28 00:42:49.715050 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-28 00:42:49.715061 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-28 00:42:49.715072 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-28 00:42:49.715083 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-28 00:42:49.715093 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-28 00:42:49.715104 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-28 00:42:49.715115 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-28 00:42:49.715126 | orchestrator |
2026-03-28 00:42:49.715137 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-03-28 00:42:49.715148 | orchestrator | Saturday 28 March 2026 00:42:44 +0000 (0:00:01.188) 0:00:03.201 ********
2026-03-28 00:42:49.715160 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-03-28 00:42:49.715171 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-03-28 00:42:49.715182 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-03-28 00:42:49.715192 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-03-28 00:42:49.715203 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-03-28 00:42:49.715214 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-03-28 00:42:49.715225 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-03-28 00:42:49.715235 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-03-28 00:42:49.715246 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-03-28 00:42:49.715257 | orchestrator |
2026-03-28 00:42:49.715268 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-03-28 00:42:49.715279 | orchestrator | Saturday 28 March 2026 00:42:46 +0000 (0:00:01.565) 0:00:04.767 ********
2026-03-28 00:42:49.715290 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-28 00:42:49.715300 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-28 00:42:49.715311 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-28 00:42:49.715322 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-28 00:42:49.715333 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-28 00:42:49.715344 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-28 00:42:49.715362 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-28 00:42:49.715381 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-28 00:42:49.715422 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-28 00:42:49.715444 | orchestrator |
2026-03-28 00:42:49.715464 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-03-28 00:42:49.715481 | orchestrator | Saturday 28 March 2026 00:42:48 +0000 (0:00:02.056) 0:00:06.824 ********
2026-03-28 00:42:49.715493 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:42:49.715504 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:42:49.715514 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:42:49.715525 | orchestrator |
2026-03-28 00:42:49.715536 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-03-28 00:42:49.715547 | orchestrator | Saturday 28 March 2026 00:42:48 +0000 (0:00:00.549) 0:00:07.373 ********
2026-03-28 00:42:49.715588 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:42:49.715600 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:42:49.715611 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:42:49.715622 | orchestrator |
2026-03-28 00:42:49.715633 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:42:49.715646 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:42:49.715659 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:42:49.715690 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:42:49.715702 | orchestrator |
2026-03-28 00:42:49.715713 | orchestrator |
2026-03-28 00:42:49.715724 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:42:49.715735 | orchestrator | Saturday 28 March 2026 00:42:49 +0000 (0:00:00.563) 0:00:07.936 ********
2026-03-28 00:42:49.715746 | orchestrator | ===============================================================================
2026-03-28 00:42:49.715757 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.06s
2026-03-28 00:42:49.715768 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.57s
2026-03-28 00:42:49.715779 | orchestrator | Check device availability ----------------------------------------------- 1.19s
2026-03-28 00:42:49.715790 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.64s
2026-03-28 00:42:49.715801 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s
2026-03-28 00:42:49.715812 | orchestrator | Request device events from the kernel ----------------------------------- 0.56s
2026-03-28 00:42:49.715823 | orchestrator | Reload udev rules ------------------------------------------------------- 0.55s
2026-03-28 00:42:49.715834 | orchestrator | Remove all rook related logical devices --------------------------------- 0.39s
2026-03-28 00:42:49.715845 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s
2026-03-28 00:43:02.190401 | orchestrator | 2026-03-28 00:43:02 | INFO  | Task 5835177f-867a-4898-ab35-d37c1f1a883b (facts) was prepared for execution.
2026-03-28 00:43:02.190574 | orchestrator | 2026-03-28 00:43:02 | INFO  | It takes a moment until task 5835177f-867a-4898-ab35-d37c1f1a883b (facts) has been started and output is visible here.
2026-03-28 00:43:13.976884 | orchestrator |
2026-03-28 00:43:13.977013 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-28 00:43:13.977031 | orchestrator |
2026-03-28 00:43:13.977044 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-28 00:43:13.977056 | orchestrator | Saturday 28 March 2026 00:43:06 +0000 (0:00:00.258) 0:00:00.258 ********
2026-03-28 00:43:13.977067 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:43:13.977079 | orchestrator | ok: [testbed-manager]
2026-03-28 00:43:13.977091 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:43:13.977102 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:43:13.977139 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:43:13.977150 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:43:13.977161 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:43:13.977172 | orchestrator |
2026-03-28 00:43:13.977183 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-28 00:43:13.977194 | orchestrator | Saturday 28 March 2026 00:43:07 +0000 (0:00:01.127) 0:00:01.385 ********
2026-03-28 00:43:13.977205 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:43:13.977217 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:43:13.977228 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:43:13.977239 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:43:13.977249 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:13.977260 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:13.977271 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:13.977282 | orchestrator |
2026-03-28 00:43:13.977293 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-28 00:43:13.977304 | orchestrator |
2026-03-28 00:43:13.977331 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-28 00:43:13.977342 | orchestrator | Saturday 28 March 2026 00:43:08 +0000 (0:00:01.274) 0:00:02.660 ********
2026-03-28 00:43:13.977353 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:43:13.977364 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:43:13.977375 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:43:13.977386 | orchestrator | ok: [testbed-manager]
2026-03-28 00:43:13.977400 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:43:13.977412 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:43:13.977424 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:43:13.977437 | orchestrator |
2026-03-28 00:43:13.977450 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-28 00:43:13.977469 | orchestrator |
2026-03-28 00:43:13.977488 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-28 00:43:13.977507 | orchestrator | Saturday 28 March 2026 00:43:13 +0000 (0:00:04.625) 0:00:07.285 ********
2026-03-28 00:43:13.977526 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:43:13.977574 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:43:13.977593 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:43:13.977611 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:43:13.977630 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:13.977649 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:13.977668 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:13.977687 | orchestrator |
2026-03-28 00:43:13.977707 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:43:13.977726 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:43:13.977747 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:43:13.977767 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:43:13.977786 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:43:13.977804 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:43:13.977823 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:43:13.977842 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:43:13.977860 | orchestrator |
2026-03-28 00:43:13.977877 | orchestrator |
2026-03-28 00:43:13.977888 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:43:13.977911 | orchestrator | Saturday 28 March 2026 00:43:13 +0000 (0:00:00.501) 0:00:07.787 ********
2026-03-28 00:43:13.977922 | orchestrator | ===============================================================================
2026-03-28 00:43:13.977933 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.63s
2026-03-28 00:43:13.977944 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.27s
2026-03-28 00:43:13.977955 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.13s
2026-03-28 00:43:13.977966 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s
2026-03-28 00:43:16.361322 | orchestrator | 2026-03-28 00:43:16 | INFO  | Task 477e3aa7-c9dd-4a74-bd8e-5e520cc743e5 (ceph-configure-lvm-volumes) was prepared for execution.
2026-03-28 00:43:16.361425 | orchestrator | 2026-03-28 00:43:16 | INFO  | It takes a moment until task 477e3aa7-c9dd-4a74-bd8e-5e520cc743e5 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-03-28 00:43:28.261518 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-28 00:43:28.261673 | orchestrator | 2.16.14
2026-03-28 00:43:28.261690 | orchestrator |
2026-03-28 00:43:28.261703 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-28 00:43:28.261715 | orchestrator |
2026-03-28 00:43:28.261727 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-28 00:43:28.261739 | orchestrator | Saturday 28 March 2026 00:43:20 +0000 (0:00:00.337) 0:00:00.337 ********
2026-03-28 00:43:28.261750 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-28 00:43:28.261762 | orchestrator |
2026-03-28 00:43:28.261773 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-28 00:43:28.261783 | orchestrator | Saturday 28 March 2026 00:43:21 +0000 (0:00:00.248) 0:00:00.586 ********
2026-03-28 00:43:28.261794 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:43:28.261807 | orchestrator |
2026-03-28 00:43:28.261826 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:28.261844 | orchestrator | Saturday 28 March 2026 00:43:21 +0000 (0:00:00.232) 0:00:00.818 ********
2026-03-28 00:43:28.261863 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-28 00:43:28.261895 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-28 00:43:28.261915 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-28 00:43:28.261935 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-28 00:43:28.261953 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-28 00:43:28.261971 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-28 00:43:28.261986 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-28 00:43:28.261996 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-28 00:43:28.262007 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-28 00:43:28.262084 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-28 00:43:28.262099 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-28 00:43:28.262112 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-28 00:43:28.262124 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-28 00:43:28.262136 | orchestrator |
2026-03-28 00:43:28.262148 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:28.262161 | orchestrator | Saturday 28 March 2026 00:43:21 +0000 (0:00:00.477) 0:00:01.296 ********
2026-03-28 00:43:28.262206 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:28.262226 | orchestrator |
2026-03-28 00:43:28.262247 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:28.262265 | orchestrator | Saturday 28 March 2026 00:43:21 +0000 (0:00:00.207) 0:00:01.503 ********
2026-03-28 00:43:28.262283 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:28.262303 | orchestrator |
2026-03-28 00:43:28.262321 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:28.262337 | orchestrator | Saturday 28 March 2026 00:43:22 +0000 (0:00:00.203) 0:00:01.707 ********
2026-03-28 00:43:28.262348 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:28.262359 | orchestrator |
2026-03-28 00:43:28.262370 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:28.262381 | orchestrator | Saturday 28 March 2026 00:43:22 +0000 (0:00:00.200) 0:00:01.908 ********
2026-03-28 00:43:28.262397 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:28.262408 | orchestrator |
2026-03-28 00:43:28.262419 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:28.262430 | orchestrator | Saturday 28 March 2026 00:43:22 +0000 (0:00:00.224) 0:00:02.132 ********
2026-03-28 00:43:28.262441 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:28.262452 | orchestrator |
2026-03-28 00:43:28.262463 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:28.262474 | orchestrator | Saturday 28 March 2026 00:43:22 +0000 (0:00:00.191) 0:00:02.324 ********
2026-03-28 00:43:28.262485 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:28.262496 | orchestrator |
2026-03-28 00:43:28.262507 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:28.262517 | orchestrator | Saturday 28 March 2026 00:43:23 +0000 (0:00:00.202) 0:00:02.526 ********
2026-03-28 00:43:28.262528 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:28.262571 | orchestrator |
2026-03-28 00:43:28.262582 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:28.262593 | orchestrator | Saturday 28 March 2026 00:43:23 +0000 (0:00:00.220) 0:00:02.746 ********
2026-03-28 00:43:28.262604 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:28.262614 | orchestrator |
2026-03-28 00:43:28.262625 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:28.262636 | orchestrator | Saturday 28 March 2026 00:43:23 +0000 (0:00:00.203) 0:00:02.950 ********
2026-03-28 00:43:28.262647 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22)
2026-03-28 00:43:28.262659 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22)
2026-03-28 00:43:28.262670 | orchestrator |
2026-03-28 00:43:28.262681 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:28.262711 | orchestrator | Saturday 28 March 2026 00:43:23 +0000 (0:00:00.402) 0:00:03.353 ********
2026-03-28 00:43:28.262723 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9560503a-139c-4329-8ffd-1ea1e0c721e5)
2026-03-28 00:43:28.262741 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9560503a-139c-4329-8ffd-1ea1e0c721e5)
2026-03-28 00:43:28.262752 | orchestrator |
2026-03-28 00:43:28.262763 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:28.262774 | orchestrator | Saturday 28 March 2026 00:43:24 +0000 (0:00:00.638) 0:00:03.991 ********
2026-03-28 00:43:28.262785 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_64213c7d-5962-413c-aa45-2f60eed78f32)
2026-03-28 00:43:28.262795 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_64213c7d-5962-413c-aa45-2f60eed78f32)
2026-03-28 00:43:28.262806 | orchestrator |
2026-03-28 00:43:28.262817 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:28.262827 | orchestrator | Saturday 28 March 2026 00:43:25 +0000 (0:00:00.633) 0:00:04.625 ********
2026-03-28 00:43:28.262848 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_94eace61-73f7-4993-ae2a-02303df71bb3)
2026-03-28 00:43:28.262860 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_94eace61-73f7-4993-ae2a-02303df71bb3)
2026-03-28 00:43:28.262878 | orchestrator |
2026-03-28 00:43:28.262896 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:28.262915 | orchestrator | Saturday 28 March 2026 00:43:26 +0000 (0:00:00.891) 0:00:05.517 ********
2026-03-28 00:43:28.262933 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-28 00:43:28.262952 | orchestrator |
2026-03-28 00:43:28.262971 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:28.262984 | orchestrator | Saturday 28 March 2026 00:43:26 +0000 (0:00:00.348) 0:00:05.865 ********
2026-03-28 00:43:28.262995 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-28 00:43:28.263006 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-28 00:43:28.263016 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-28 00:43:28.263027 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-28 00:43:28.263038 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-28 00:43:28.263048 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-28 00:43:28.263059 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-28 00:43:28.263069 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-28 00:43:28.263080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-28 00:43:28.263092 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-28 00:43:28.263110 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-28 00:43:28.263129 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-28 00:43:28.263146 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-28 00:43:28.263164 | orchestrator |
2026-03-28 00:43:28.263175 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:28.263186 | orchestrator | Saturday 28 March 2026 00:43:26 +0000 (0:00:00.376) 0:00:06.242 ********
2026-03-28 00:43:28.263196 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:28.263207 | orchestrator |
2026-03-28 00:43:28.263218 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:28.263229 | orchestrator | Saturday 28 March 2026 00:43:26 +0000 (0:00:00.209) 0:00:06.451 ********
2026-03-28 00:43:28.263245 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:28.263265 | orchestrator |
2026-03-28 00:43:28.263284 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:28.263302 | orchestrator | Saturday 28 March 2026 00:43:27 +0000 (0:00:00.228) 0:00:06.679 ********
2026-03-28 00:43:28.263322 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:28.263340 | orchestrator |
2026-03-28 00:43:28.263359 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:28.263379 | orchestrator | Saturday 28 March 2026 00:43:27 +0000 (0:00:00.218) 0:00:06.898 ********
2026-03-28 00:43:28.263397 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:28.263416 | orchestrator |
2026-03-28 00:43:28.263435 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:28.263447 | orchestrator | Saturday 28 March 2026 00:43:27 +0000 (0:00:00.253) 0:00:07.151 ********
2026-03-28 00:43:28.263466 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:28.263477 | orchestrator |
2026-03-28 00:43:28.263488 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:28.263499 | orchestrator | Saturday 28 March 2026 00:43:27 +0000 (0:00:00.211) 0:00:07.363 ********
2026-03-28 00:43:28.263509 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:28.263520 | orchestrator |
2026-03-28 00:43:28.263531 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:28.263574 | orchestrator | Saturday 28 March 2026 00:43:28 +0000 (0:00:00.203) 0:00:07.567 ********
2026-03-28 00:43:28.263586 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:28.263597 | orchestrator |
2026-03-28 00:43:28.263617 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:35.822811 | orchestrator | Saturday 28 March 2026 00:43:28 +0000 (0:00:00.197) 0:00:07.764 ********
2026-03-28 00:43:35.822903 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:35.822915 | orchestrator |
2026-03-28 00:43:35.822925 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:35.822934 | orchestrator | Saturday 28 March 2026 00:43:28 +0000 (0:00:00.191) 0:00:07.956 ********
2026-03-28 00:43:35.822942 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-28 00:43:35.822966 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-28 00:43:35.822975 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-28 00:43:35.822983 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-28 00:43:35.822992 | orchestrator |
2026-03-28 00:43:35.823000 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:35.823008 | orchestrator | Saturday 28 March 2026 00:43:29 +0000 (0:00:01.041) 0:00:08.998 ********
2026-03-28 00:43:35.823017 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:35.823025 | orchestrator |
2026-03-28 00:43:35.823033 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:35.823041 | orchestrator | Saturday 28 March 2026 00:43:29 +0000 (0:00:00.203) 0:00:09.201 ********
2026-03-28 00:43:35.823049 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:35.823057 | orchestrator |
2026-03-28 00:43:35.823065 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:35.823073 | orchestrator | Saturday 28 March 2026 00:43:29 +0000 (0:00:00.195) 0:00:09.396 ********
2026-03-28 00:43:35.823082 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:35.823089 | orchestrator |
2026-03-28 00:43:35.823097 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:35.823106 | orchestrator | Saturday 28 March 2026 00:43:30 +0000 (0:00:00.202) 0:00:09.599 ********
2026-03-28 00:43:35.823113 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:35.823121 | orchestrator |
2026-03-28 00:43:35.823129 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-28 00:43:35.823137 | orchestrator | Saturday 28 March 2026 00:43:30 +0000 (0:00:00.183) 0:00:09.783 ********
2026-03-28 00:43:35.823146 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-03-28 00:43:35.823154 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-03-28 00:43:35.823162 | orchestrator |
2026-03-28 00:43:35.823170 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-28 00:43:35.823178 | orchestrator | Saturday 28 March 2026 00:43:30 +0000 (0:00:00.212) 0:00:09.996 ********
2026-03-28 00:43:35.823186 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:35.823194 | orchestrator |
2026-03-28 00:43:35.823202 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-28 00:43:35.823210 | orchestrator | Saturday 28 March 2026 00:43:30 +0000 (0:00:00.153) 0:00:10.150 ********
2026-03-28 00:43:35.823218 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:35.823226 | orchestrator |
2026-03-28 00:43:35.823234 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-28 00:43:35.823242 | orchestrator | Saturday 28 March 2026 00:43:30 +0000 (0:00:00.140) 0:00:10.290 ********
2026-03-28 00:43:35.823282 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:35.823299 | orchestrator |
2026-03-28 00:43:35.823307 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-28 00:43:35.823315 | orchestrator | Saturday 28 March 2026 00:43:30 +0000 (0:00:00.160) 0:00:10.450 ********
2026-03-28 00:43:35.823323 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:43:35.823331 | orchestrator |
2026-03-28 00:43:35.823339 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-28 00:43:35.823348 | orchestrator | Saturday 28 March 2026 00:43:31 +0000 (0:00:00.141) 0:00:10.592 ********
2026-03-28 00:43:35.823356 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e282229f-a8c2-5daa-9c69-6eb93429113b'}})
2026-03-28 00:43:35.823365 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1d415d19-3246-5675-b441-c36cba308c79'}})
2026-03-28 00:43:35.823373 | orchestrator |
2026-03-28 00:43:35.823381 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-28 00:43:35.823390 | orchestrator | Saturday 28 March 2026 00:43:31 +0000 (0:00:00.163) 0:00:10.755 ********
2026-03-28 00:43:35.823399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e282229f-a8c2-5daa-9c69-6eb93429113b'}})
2026-03-28 00:43:35.823415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1d415d19-3246-5675-b441-c36cba308c79'}})
2026-03-28 00:43:35.823423 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:35.823431 | orchestrator |
2026-03-28 00:43:35.823439 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-28 00:43:35.823447 | orchestrator | Saturday 28 March 2026 00:43:31 +0000 (0:00:00.154) 0:00:10.909 ********
2026-03-28 00:43:35.823455 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e282229f-a8c2-5daa-9c69-6eb93429113b'}})
2026-03-28 00:43:35.823464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1d415d19-3246-5675-b441-c36cba308c79'}})
2026-03-28 00:43:35.823472 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:35.823480 | orchestrator |
2026-03-28 00:43:35.823488 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-28 00:43:35.823496 | orchestrator | Saturday 28 March 2026 00:43:31 +0000 (0:00:00.346) 0:00:11.256 ********
2026-03-28 00:43:35.823504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e282229f-a8c2-5daa-9c69-6eb93429113b'}})
2026-03-28 00:43:35.823526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1d415d19-3246-5675-b441-c36cba308c79'}})
2026-03-28 00:43:35.823550 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:35.823559 | orchestrator |
2026-03-28 00:43:35.823567 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-28 00:43:35.823575 | orchestrator | Saturday 28 March 2026 00:43:31 +0000 (0:00:00.143) 0:00:11.399 ********
2026-03-28 00:43:35.823583 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:43:35.823591 | orchestrator |
2026-03-28 00:43:35.823598 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-28 00:43:35.823606 | orchestrator | Saturday 28 March 2026 00:43:32 +0000 (0:00:00.140) 0:00:11.540 ********
2026-03-28 00:43:35.823614 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:43:35.823622 | orchestrator |
2026-03-28 00:43:35.823630 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-28 00:43:35.823638 | orchestrator | Saturday 28 March 2026 00:43:32 +0000 (0:00:00.140) 0:00:11.681 ********
2026-03-28 00:43:35.823646 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:43:35.823654 | orchestrator |
2026-03-28 00:43:35.823662 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-28 00:43:35.823669 | orchestrator | Saturday 28 March 2026 00:43:32 +0000 (0:00:00.148) 0:00:11.829 ******** 2026-03-28 00:43:35.823684 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:43:35.823692 | orchestrator | 2026-03-28 00:43:35.823700 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-28 00:43:35.823707 | orchestrator | Saturday 28 March 2026 00:43:32 +0000 (0:00:00.130) 0:00:11.960 ******** 2026-03-28 00:43:35.823715 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:43:35.823723 | orchestrator | 2026-03-28 00:43:35.823731 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-28 00:43:35.823739 | orchestrator | Saturday 28 March 2026 00:43:32 +0000 (0:00:00.154) 0:00:12.114 ******** 2026-03-28 00:43:35.823747 | orchestrator | ok: [testbed-node-3] => { 2026-03-28 00:43:35.823755 | orchestrator |  "ceph_osd_devices": { 2026-03-28 00:43:35.823763 | orchestrator |  "sdb": { 2026-03-28 00:43:35.823771 | orchestrator |  "osd_lvm_uuid": "e282229f-a8c2-5daa-9c69-6eb93429113b" 2026-03-28 00:43:35.823779 | orchestrator |  }, 2026-03-28 00:43:35.823787 | orchestrator |  "sdc": { 2026-03-28 00:43:35.823794 | orchestrator |  "osd_lvm_uuid": "1d415d19-3246-5675-b441-c36cba308c79" 2026-03-28 00:43:35.823802 | orchestrator |  } 2026-03-28 00:43:35.823810 | orchestrator |  } 2026-03-28 00:43:35.823818 | orchestrator | } 2026-03-28 00:43:35.823826 | orchestrator | 2026-03-28 00:43:35.823834 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-28 00:43:35.823847 | orchestrator | Saturday 28 March 2026 00:43:32 +0000 (0:00:00.137) 0:00:12.252 ******** 2026-03-28 00:43:35.823855 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:43:35.823863 | orchestrator | 
2026-03-28 00:43:35.823870 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-28 00:43:35.823878 | orchestrator | Saturday 28 March 2026 00:43:32 +0000 (0:00:00.135) 0:00:12.388 ******** 2026-03-28 00:43:35.823886 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:43:35.823894 | orchestrator | 2026-03-28 00:43:35.823902 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-28 00:43:35.823910 | orchestrator | Saturday 28 March 2026 00:43:33 +0000 (0:00:00.147) 0:00:12.535 ******** 2026-03-28 00:43:35.823917 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:43:35.823925 | orchestrator | 2026-03-28 00:43:35.823933 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-28 00:43:35.823941 | orchestrator | Saturday 28 March 2026 00:43:33 +0000 (0:00:00.135) 0:00:12.671 ******** 2026-03-28 00:43:35.823948 | orchestrator | changed: [testbed-node-3] => { 2026-03-28 00:43:35.823956 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-28 00:43:35.823964 | orchestrator |  "ceph_osd_devices": { 2026-03-28 00:43:35.823972 | orchestrator |  "sdb": { 2026-03-28 00:43:35.823980 | orchestrator |  "osd_lvm_uuid": "e282229f-a8c2-5daa-9c69-6eb93429113b" 2026-03-28 00:43:35.823988 | orchestrator |  }, 2026-03-28 00:43:35.823996 | orchestrator |  "sdc": { 2026-03-28 00:43:35.824004 | orchestrator |  "osd_lvm_uuid": "1d415d19-3246-5675-b441-c36cba308c79" 2026-03-28 00:43:35.824012 | orchestrator |  } 2026-03-28 00:43:35.824020 | orchestrator |  }, 2026-03-28 00:43:35.824028 | orchestrator |  "lvm_volumes": [ 2026-03-28 00:43:35.824035 | orchestrator |  { 2026-03-28 00:43:35.824043 | orchestrator |  "data": "osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b", 2026-03-28 00:43:35.824051 | orchestrator |  "data_vg": "ceph-e282229f-a8c2-5daa-9c69-6eb93429113b" 2026-03-28 00:43:35.824059 | orchestrator |  }, 
2026-03-28 00:43:35.824067 | orchestrator |  { 2026-03-28 00:43:35.824075 | orchestrator |  "data": "osd-block-1d415d19-3246-5675-b441-c36cba308c79", 2026-03-28 00:43:35.824083 | orchestrator |  "data_vg": "ceph-1d415d19-3246-5675-b441-c36cba308c79" 2026-03-28 00:43:35.824091 | orchestrator |  } 2026-03-28 00:43:35.824098 | orchestrator |  ] 2026-03-28 00:43:35.824106 | orchestrator |  } 2026-03-28 00:43:35.824114 | orchestrator | } 2026-03-28 00:43:35.824128 | orchestrator | 2026-03-28 00:43:35.824136 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-28 00:43:35.824144 | orchestrator | Saturday 28 March 2026 00:43:33 +0000 (0:00:00.410) 0:00:13.082 ******** 2026-03-28 00:43:35.824152 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 00:43:35.824160 | orchestrator | 2026-03-28 00:43:35.824168 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-28 00:43:35.824175 | orchestrator | 2026-03-28 00:43:35.824183 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-28 00:43:35.824191 | orchestrator | Saturday 28 March 2026 00:43:35 +0000 (0:00:01.764) 0:00:14.846 ******** 2026-03-28 00:43:35.824199 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-28 00:43:35.824207 | orchestrator | 2026-03-28 00:43:35.824215 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-28 00:43:35.824223 | orchestrator | Saturday 28 March 2026 00:43:35 +0000 (0:00:00.246) 0:00:15.093 ******** 2026-03-28 00:43:35.824231 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:43:35.824238 | orchestrator | 2026-03-28 00:43:35.824251 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:43.668066 | orchestrator | Saturday 28 March 2026 00:43:35 +0000 (0:00:00.239) 
0:00:15.332 ******** 2026-03-28 00:43:43.668156 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-28 00:43:43.668169 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-28 00:43:43.668179 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-28 00:43:43.668187 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-28 00:43:43.668196 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-28 00:43:43.668205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-28 00:43:43.668214 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-28 00:43:43.668236 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-28 00:43:43.668245 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-28 00:43:43.668254 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-28 00:43:43.668263 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-28 00:43:43.668271 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-28 00:43:43.668284 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-28 00:43:43.668294 | orchestrator | 2026-03-28 00:43:43.668303 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:43.668313 | orchestrator | Saturday 28 March 2026 00:43:36 +0000 (0:00:00.393) 0:00:15.725 ******** 2026-03-28 00:43:43.668322 | orchestrator | skipping: 
[testbed-node-4] 2026-03-28 00:43:43.668340 | orchestrator | 2026-03-28 00:43:43.668350 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:43.668359 | orchestrator | Saturday 28 March 2026 00:43:36 +0000 (0:00:00.224) 0:00:15.949 ******** 2026-03-28 00:43:43.668368 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:43.668376 | orchestrator | 2026-03-28 00:43:43.668385 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:43.668394 | orchestrator | Saturday 28 March 2026 00:43:36 +0000 (0:00:00.204) 0:00:16.154 ******** 2026-03-28 00:43:43.668403 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:43.668412 | orchestrator | 2026-03-28 00:43:43.668420 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:43.668429 | orchestrator | Saturday 28 March 2026 00:43:36 +0000 (0:00:00.185) 0:00:16.340 ******** 2026-03-28 00:43:43.668458 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:43.668467 | orchestrator | 2026-03-28 00:43:43.668476 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:43.668485 | orchestrator | Saturday 28 March 2026 00:43:37 +0000 (0:00:00.177) 0:00:16.518 ******** 2026-03-28 00:43:43.668493 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:43.668502 | orchestrator | 2026-03-28 00:43:43.668511 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:43.668559 | orchestrator | Saturday 28 March 2026 00:43:37 +0000 (0:00:00.650) 0:00:17.169 ******** 2026-03-28 00:43:43.668569 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:43.668578 | orchestrator | 2026-03-28 00:43:43.668587 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:43.668595 | 
orchestrator | Saturday 28 March 2026 00:43:37 +0000 (0:00:00.232) 0:00:17.402 ******** 2026-03-28 00:43:43.668604 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:43.668613 | orchestrator | 2026-03-28 00:43:43.668621 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:43.668630 | orchestrator | Saturday 28 March 2026 00:43:38 +0000 (0:00:00.192) 0:00:17.595 ******** 2026-03-28 00:43:43.668639 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:43.668647 | orchestrator | 2026-03-28 00:43:43.668656 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:43.668665 | orchestrator | Saturday 28 March 2026 00:43:38 +0000 (0:00:00.197) 0:00:17.792 ******** 2026-03-28 00:43:43.668674 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7) 2026-03-28 00:43:43.668683 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7) 2026-03-28 00:43:43.668692 | orchestrator | 2026-03-28 00:43:43.668701 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:43.668710 | orchestrator | Saturday 28 March 2026 00:43:38 +0000 (0:00:00.446) 0:00:18.239 ******** 2026-03-28 00:43:43.668719 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4cb6368c-0066-4efd-8388-81f1557a02ca) 2026-03-28 00:43:43.668727 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4cb6368c-0066-4efd-8388-81f1557a02ca) 2026-03-28 00:43:43.668736 | orchestrator | 2026-03-28 00:43:43.668745 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:43.668753 | orchestrator | Saturday 28 March 2026 00:43:39 +0000 (0:00:00.438) 0:00:18.677 ******** 2026-03-28 00:43:43.668762 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_b9aebbdd-9418-41ff-9099-90b7dcb703f9) 2026-03-28 00:43:43.668770 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b9aebbdd-9418-41ff-9099-90b7dcb703f9) 2026-03-28 00:43:43.668779 | orchestrator | 2026-03-28 00:43:43.668788 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:43.668808 | orchestrator | Saturday 28 March 2026 00:43:39 +0000 (0:00:00.556) 0:00:19.233 ******** 2026-03-28 00:43:43.668818 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f8ddcfbb-f935-4942-af25-8ac280f1cc67) 2026-03-28 00:43:43.668827 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f8ddcfbb-f935-4942-af25-8ac280f1cc67) 2026-03-28 00:43:43.668836 | orchestrator | 2026-03-28 00:43:43.668850 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:43.668859 | orchestrator | Saturday 28 March 2026 00:43:40 +0000 (0:00:00.501) 0:00:19.735 ******** 2026-03-28 00:43:43.668867 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-28 00:43:43.668876 | orchestrator | 2026-03-28 00:43:43.668885 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:43.668894 | orchestrator | Saturday 28 March 2026 00:43:40 +0000 (0:00:00.359) 0:00:20.094 ******** 2026-03-28 00:43:43.668902 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-28 00:43:43.668917 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-28 00:43:43.668926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-28 00:43:43.668934 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-28 00:43:43.668943 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-28 00:43:43.668951 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-28 00:43:43.668960 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-28 00:43:43.668974 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-28 00:43:43.668989 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-28 00:43:43.669004 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-28 00:43:43.669020 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-28 00:43:43.669036 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-28 00:43:43.669052 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-28 00:43:43.669068 | orchestrator | 2026-03-28 00:43:43.669083 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:43.669098 | orchestrator | Saturday 28 March 2026 00:43:40 +0000 (0:00:00.374) 0:00:20.468 ******** 2026-03-28 00:43:43.669115 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:43.669130 | orchestrator | 2026-03-28 00:43:43.669139 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:43.669148 | orchestrator | Saturday 28 March 2026 00:43:41 +0000 (0:00:00.570) 0:00:21.038 ******** 2026-03-28 00:43:43.669157 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:43.669166 | orchestrator | 2026-03-28 00:43:43.669174 | orchestrator | TASK [Add known partitions to the list of available block 
devices] ************* 2026-03-28 00:43:43.669183 | orchestrator | Saturday 28 March 2026 00:43:41 +0000 (0:00:00.176) 0:00:21.215 ******** 2026-03-28 00:43:43.669191 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:43.669200 | orchestrator | 2026-03-28 00:43:43.669208 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:43.669218 | orchestrator | Saturday 28 March 2026 00:43:41 +0000 (0:00:00.189) 0:00:21.405 ******** 2026-03-28 00:43:43.669234 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:43.669248 | orchestrator | 2026-03-28 00:43:43.669262 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:43.669274 | orchestrator | Saturday 28 March 2026 00:43:42 +0000 (0:00:00.170) 0:00:21.576 ******** 2026-03-28 00:43:43.669287 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:43.669301 | orchestrator | 2026-03-28 00:43:43.669315 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:43.669328 | orchestrator | Saturday 28 March 2026 00:43:42 +0000 (0:00:00.170) 0:00:21.746 ******** 2026-03-28 00:43:43.669340 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:43.669353 | orchestrator | 2026-03-28 00:43:43.669367 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:43.669380 | orchestrator | Saturday 28 March 2026 00:43:42 +0000 (0:00:00.182) 0:00:21.929 ******** 2026-03-28 00:43:43.669394 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:43.669407 | orchestrator | 2026-03-28 00:43:43.669419 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:43.669432 | orchestrator | Saturday 28 March 2026 00:43:42 +0000 (0:00:00.163) 0:00:22.092 ******** 2026-03-28 00:43:43.669445 | orchestrator | skipping: [testbed-node-4] 
2026-03-28 00:43:43.669470 | orchestrator | 2026-03-28 00:43:43.669484 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:43.669497 | orchestrator | Saturday 28 March 2026 00:43:42 +0000 (0:00:00.164) 0:00:22.257 ******** 2026-03-28 00:43:43.669512 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-28 00:43:43.669565 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-28 00:43:43.669581 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-28 00:43:43.669594 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-28 00:43:43.669603 | orchestrator | 2026-03-28 00:43:43.669612 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:43.669621 | orchestrator | Saturday 28 March 2026 00:43:43 +0000 (0:00:00.729) 0:00:22.986 ******** 2026-03-28 00:43:43.669630 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:50.312651 | orchestrator | 2026-03-28 00:43:50.312785 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:50.312807 | orchestrator | Saturday 28 March 2026 00:43:43 +0000 (0:00:00.192) 0:00:23.178 ******** 2026-03-28 00:43:50.312820 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:50.312832 | orchestrator | 2026-03-28 00:43:50.312849 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:50.312891 | orchestrator | Saturday 28 March 2026 00:43:43 +0000 (0:00:00.190) 0:00:23.368 ******** 2026-03-28 00:43:50.312914 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:50.312933 | orchestrator | 2026-03-28 00:43:50.312947 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:50.312959 | orchestrator | Saturday 28 March 2026 00:43:44 +0000 (0:00:00.177) 0:00:23.546 ******** 2026-03-28 00:43:50.312970 | 
orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:50.312981 | orchestrator | 2026-03-28 00:43:50.312992 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-28 00:43:50.313003 | orchestrator | Saturday 28 March 2026 00:43:44 +0000 (0:00:00.558) 0:00:24.104 ******** 2026-03-28 00:43:50.313014 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-28 00:43:50.313025 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-28 00:43:50.313036 | orchestrator | 2026-03-28 00:43:50.313047 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-28 00:43:50.313058 | orchestrator | Saturday 28 March 2026 00:43:44 +0000 (0:00:00.188) 0:00:24.292 ******** 2026-03-28 00:43:50.313069 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:50.313080 | orchestrator | 2026-03-28 00:43:50.313092 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-28 00:43:50.313103 | orchestrator | Saturday 28 March 2026 00:43:44 +0000 (0:00:00.130) 0:00:24.423 ******** 2026-03-28 00:43:50.313113 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:50.313124 | orchestrator | 2026-03-28 00:43:50.313136 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-28 00:43:50.313147 | orchestrator | Saturday 28 March 2026 00:43:45 +0000 (0:00:00.127) 0:00:24.550 ******** 2026-03-28 00:43:50.313157 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:50.313168 | orchestrator | 2026-03-28 00:43:50.313179 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-28 00:43:50.313190 | orchestrator | Saturday 28 March 2026 00:43:45 +0000 (0:00:00.139) 0:00:24.690 ******** 2026-03-28 00:43:50.313201 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:43:50.313213 | 
orchestrator | 2026-03-28 00:43:50.313224 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-28 00:43:50.313235 | orchestrator | Saturday 28 March 2026 00:43:45 +0000 (0:00:00.139) 0:00:24.829 ******** 2026-03-28 00:43:50.313246 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de32c164-f4a0-5092-ad33-650515756f9d'}}) 2026-03-28 00:43:50.313257 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '65811f0f-7bf7-557a-9618-106707fc2899'}}) 2026-03-28 00:43:50.313291 | orchestrator | 2026-03-28 00:43:50.313303 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-28 00:43:50.313314 | orchestrator | Saturday 28 March 2026 00:43:45 +0000 (0:00:00.186) 0:00:25.015 ******** 2026-03-28 00:43:50.313325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de32c164-f4a0-5092-ad33-650515756f9d'}})  2026-03-28 00:43:50.313337 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '65811f0f-7bf7-557a-9618-106707fc2899'}})  2026-03-28 00:43:50.313348 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:50.313359 | orchestrator | 2026-03-28 00:43:50.313370 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-28 00:43:50.313381 | orchestrator | Saturday 28 March 2026 00:43:45 +0000 (0:00:00.138) 0:00:25.154 ******** 2026-03-28 00:43:50.313392 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de32c164-f4a0-5092-ad33-650515756f9d'}})  2026-03-28 00:43:50.313403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '65811f0f-7bf7-557a-9618-106707fc2899'}})  2026-03-28 00:43:50.313413 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:50.313424 | orchestrator | 2026-03-28 
00:43:50.313435 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-28 00:43:50.313446 | orchestrator | Saturday 28 March 2026 00:43:45 +0000 (0:00:00.166) 0:00:25.320 ******** 2026-03-28 00:43:50.313457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de32c164-f4a0-5092-ad33-650515756f9d'}})  2026-03-28 00:43:50.313469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '65811f0f-7bf7-557a-9618-106707fc2899'}})  2026-03-28 00:43:50.313480 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:50.313491 | orchestrator | 2026-03-28 00:43:50.313502 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-28 00:43:50.313513 | orchestrator | Saturday 28 March 2026 00:43:45 +0000 (0:00:00.167) 0:00:25.488 ******** 2026-03-28 00:43:50.313550 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:43:50.313563 | orchestrator | 2026-03-28 00:43:50.313574 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-28 00:43:50.313585 | orchestrator | Saturday 28 March 2026 00:43:46 +0000 (0:00:00.139) 0:00:25.627 ******** 2026-03-28 00:43:50.313596 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:43:50.313607 | orchestrator | 2026-03-28 00:43:50.313617 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-28 00:43:50.313628 | orchestrator | Saturday 28 March 2026 00:43:46 +0000 (0:00:00.161) 0:00:25.789 ******** 2026-03-28 00:43:50.313659 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:50.313671 | orchestrator | 2026-03-28 00:43:50.313682 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-28 00:43:50.313693 | orchestrator | Saturday 28 March 2026 00:43:46 +0000 (0:00:00.344) 0:00:26.133 ******** 2026-03-28 
00:43:50.313703 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:50.313714 | orchestrator | 2026-03-28 00:43:50.313725 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-28 00:43:50.313736 | orchestrator | Saturday 28 March 2026 00:43:46 +0000 (0:00:00.142) 0:00:26.276 ******** 2026-03-28 00:43:50.313747 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:50.313758 | orchestrator | 2026-03-28 00:43:50.313769 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-28 00:43:50.313779 | orchestrator | Saturday 28 March 2026 00:43:46 +0000 (0:00:00.149) 0:00:26.425 ******** 2026-03-28 00:43:50.313790 | orchestrator | ok: [testbed-node-4] => { 2026-03-28 00:43:50.313801 | orchestrator |  "ceph_osd_devices": { 2026-03-28 00:43:50.313812 | orchestrator |  "sdb": { 2026-03-28 00:43:50.313823 | orchestrator |  "osd_lvm_uuid": "de32c164-f4a0-5092-ad33-650515756f9d" 2026-03-28 00:43:50.313834 | orchestrator |  }, 2026-03-28 00:43:50.313853 | orchestrator |  "sdc": { 2026-03-28 00:43:50.313871 | orchestrator |  "osd_lvm_uuid": "65811f0f-7bf7-557a-9618-106707fc2899" 2026-03-28 00:43:50.313882 | orchestrator |  } 2026-03-28 00:43:50.313893 | orchestrator |  } 2026-03-28 00:43:50.313904 | orchestrator | } 2026-03-28 00:43:50.313915 | orchestrator | 2026-03-28 00:43:50.313926 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-28 00:43:50.313937 | orchestrator | Saturday 28 March 2026 00:43:47 +0000 (0:00:00.154) 0:00:26.579 ******** 2026-03-28 00:43:50.313948 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:50.313959 | orchestrator | 2026-03-28 00:43:50.313970 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-28 00:43:50.313981 | orchestrator | Saturday 28 March 2026 00:43:47 +0000 (0:00:00.145) 0:00:26.725 ******** 2026-03-28 
00:43:50.313991 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:50.314002 | orchestrator |
2026-03-28 00:43:50.314013 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-28 00:43:50.314097 | orchestrator | Saturday 28 March 2026 00:43:47 +0000 (0:00:00.138) 0:00:26.864 ********
2026-03-28 00:43:50.314108 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:50.314119 | orchestrator |
2026-03-28 00:43:50.314130 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-28 00:43:50.314141 | orchestrator | Saturday 28 March 2026 00:43:47 +0000 (0:00:00.143) 0:00:27.008 ********
2026-03-28 00:43:50.314152 | orchestrator | changed: [testbed-node-4] => {
2026-03-28 00:43:50.314163 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-28 00:43:50.314174 | orchestrator |         "ceph_osd_devices": {
2026-03-28 00:43:50.314222 | orchestrator |             "sdb": {
2026-03-28 00:43:50.314238 | orchestrator |                 "osd_lvm_uuid": "de32c164-f4a0-5092-ad33-650515756f9d"
2026-03-28 00:43:50.314259 | orchestrator |             },
2026-03-28 00:43:50.314280 | orchestrator |             "sdc": {
2026-03-28 00:43:50.314297 | orchestrator |                 "osd_lvm_uuid": "65811f0f-7bf7-557a-9618-106707fc2899"
2026-03-28 00:43:50.314315 | orchestrator |             }
2026-03-28 00:43:50.314331 | orchestrator |         },
2026-03-28 00:43:50.314348 | orchestrator |         "lvm_volumes": [
2026-03-28 00:43:50.314366 | orchestrator |             {
2026-03-28 00:43:50.314383 | orchestrator |                 "data": "osd-block-de32c164-f4a0-5092-ad33-650515756f9d",
2026-03-28 00:43:50.314402 | orchestrator |                 "data_vg": "ceph-de32c164-f4a0-5092-ad33-650515756f9d"
2026-03-28 00:43:50.314419 | orchestrator |             },
2026-03-28 00:43:50.314436 | orchestrator |             {
2026-03-28 00:43:50.314454 | orchestrator |                 "data": "osd-block-65811f0f-7bf7-557a-9618-106707fc2899",
2026-03-28 00:43:50.314471 | orchestrator |                 "data_vg": "ceph-65811f0f-7bf7-557a-9618-106707fc2899"
2026-03-28 00:43:50.314490 | orchestrator |             }
2026-03-28 00:43:50.314508 | orchestrator |         ]
2026-03-28 00:43:50.314554 | orchestrator |     }
2026-03-28 00:43:50.314572 | orchestrator | }
2026-03-28 00:43:50.314584 | orchestrator |
2026-03-28 00:43:50.314594 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-28 00:43:50.314605 | orchestrator | Saturday 28 March 2026 00:43:47 +0000 (0:00:00.212) 0:00:27.220 ********
2026-03-28 00:43:50.314616 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-28 00:43:50.314627 | orchestrator |
2026-03-28 00:43:50.314638 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-28 00:43:50.314649 | orchestrator |
2026-03-28 00:43:50.314659 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-28 00:43:50.314670 | orchestrator | Saturday 28 March 2026 00:43:48 +0000 (0:00:01.160) 0:00:28.381 ********
2026-03-28 00:43:50.314681 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-28 00:43:50.314692 | orchestrator |
2026-03-28 00:43:50.314703 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-28 00:43:50.314725 | orchestrator | Saturday 28 March 2026 00:43:49 +0000 (0:00:00.788) 0:00:29.169 ********
2026-03-28 00:43:50.314736 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:43:50.314747 | orchestrator |
2026-03-28 00:43:50.314758 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:50.314769 | orchestrator | Saturday 28 March 2026 00:43:49 +0000 (0:00:00.234) 0:00:29.404 ********
2026-03-28 00:43:50.314780 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-28 00:43:50.314790 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-28 00:43:50.314801 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-28 00:43:50.314812 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-28 00:43:50.314823 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-28 00:43:50.314847 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-28 00:43:57.827742 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-28 00:43:57.827838 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-28 00:43:57.827854 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-28 00:43:57.827866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-28 00:43:57.827885 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-28 00:43:57.827913 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-28 00:43:57.827937 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-28 00:43:57.827957 | orchestrator |
2026-03-28 00:43:57.827978 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:57.827998 | orchestrator | Saturday 28 March 2026 00:43:50 +0000 (0:00:00.412) 0:00:29.817 ********
2026-03-28 00:43:57.828017 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:57.828039 | orchestrator |
2026-03-28 00:43:57.828053 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:57.828063 | orchestrator | Saturday 28 March 2026 00:43:50 +0000 (0:00:00.208) 0:00:30.026 ********
2026-03-28 00:43:57.828074 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:57.828085 | orchestrator |
2026-03-28 00:43:57.828095 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:57.828106 | orchestrator | Saturday 28 March 2026 00:43:50 +0000 (0:00:00.243) 0:00:30.269 ********
2026-03-28 00:43:57.828117 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:57.828127 | orchestrator |
2026-03-28 00:43:57.828138 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:57.828149 | orchestrator | Saturday 28 March 2026 00:43:51 +0000 (0:00:00.247) 0:00:30.517 ********
2026-03-28 00:43:57.828160 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:57.828171 | orchestrator |
2026-03-28 00:43:57.828181 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:57.828192 | orchestrator | Saturday 28 March 2026 00:43:51 +0000 (0:00:00.236) 0:00:30.754 ********
2026-03-28 00:43:57.828203 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:57.828213 | orchestrator |
2026-03-28 00:43:57.828224 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:57.828235 | orchestrator | Saturday 28 March 2026 00:43:51 +0000 (0:00:00.212) 0:00:30.966 ********
2026-03-28 00:43:57.828246 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:57.828257 | orchestrator |
2026-03-28 00:43:57.828285 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:57.828299 | orchestrator | Saturday 28 March 2026 00:43:51 +0000 (0:00:00.180) 0:00:31.147 ********
2026-03-28 00:43:57.828330 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:57.828343 | orchestrator |
2026-03-28 00:43:57.828356 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:57.828369 | orchestrator | Saturday 28 March 2026 00:43:51 +0000 (0:00:00.249) 0:00:31.396 ********
2026-03-28 00:43:57.828381 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:57.828394 | orchestrator |
2026-03-28 00:43:57.828407 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:57.828420 | orchestrator | Saturday 28 March 2026 00:43:52 +0000 (0:00:00.207) 0:00:31.603 ********
2026-03-28 00:43:57.828433 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01)
2026-03-28 00:43:57.828447 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01)
2026-03-28 00:43:57.828459 | orchestrator |
2026-03-28 00:43:57.828472 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:57.828486 | orchestrator | Saturday 28 March 2026 00:43:52 +0000 (0:00:00.771) 0:00:32.375 ********
2026-03-28 00:43:57.828498 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d59a946d-61ee-4c80-a151-abde4d1a3094)
2026-03-28 00:43:57.828511 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d59a946d-61ee-4c80-a151-abde4d1a3094)
2026-03-28 00:43:57.828542 | orchestrator |
2026-03-28 00:43:57.828553 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:57.828564 | orchestrator | Saturday 28 March 2026 00:43:53 +0000 (0:00:00.475) 0:00:32.850 ********
2026-03-28 00:43:57.828575 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_adec6741-41cb-49e2-9389-e6d1302151a0)
2026-03-28 00:43:57.828586 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_adec6741-41cb-49e2-9389-e6d1302151a0)
2026-03-28 00:43:57.828597 | orchestrator |
2026-03-28 00:43:57.828607 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:57.828619 | orchestrator | Saturday 28 March 2026 00:43:53 +0000 (0:00:00.373) 0:00:33.224 ********
2026-03-28 00:43:57.828629 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_86e8f6ba-fcdd-41b8-9839-c0061159d97d)
2026-03-28 00:43:57.828640 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_86e8f6ba-fcdd-41b8-9839-c0061159d97d)
2026-03-28 00:43:57.828651 | orchestrator |
2026-03-28 00:43:57.828662 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:57.828673 | orchestrator | Saturday 28 March 2026 00:43:54 +0000 (0:00:00.355) 0:00:33.580 ********
2026-03-28 00:43:57.828683 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-28 00:43:57.828695 | orchestrator |
2026-03-28 00:43:57.828706 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:57.828734 | orchestrator | Saturday 28 March 2026 00:43:54 +0000 (0:00:00.241) 0:00:33.821 ********
2026-03-28 00:43:57.828745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-28 00:43:57.828756 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-28 00:43:57.828767 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-28 00:43:57.828778 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-28 00:43:57.828789 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-28 00:43:57.828799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-28 00:43:57.828810 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-28 00:43:57.828821 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-28 00:43:57.828840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-28 00:43:57.828851 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-28 00:43:57.828862 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-28 00:43:57.828872 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-28 00:43:57.828883 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-28 00:43:57.828894 | orchestrator |
2026-03-28 00:43:57.828905 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:57.828916 | orchestrator | Saturday 28 March 2026 00:43:54 +0000 (0:00:00.285) 0:00:34.106 ********
2026-03-28 00:43:57.828927 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:57.828938 | orchestrator |
2026-03-28 00:43:57.828948 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:57.828959 | orchestrator | Saturday 28 March 2026 00:43:54 +0000 (0:00:00.146) 0:00:34.253 ********
2026-03-28 00:43:57.828975 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:57.828995 | orchestrator |
2026-03-28 00:43:57.829014 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:57.829033 | orchestrator | Saturday 28 March 2026 00:43:54 +0000 (0:00:00.163) 0:00:34.416 ********
2026-03-28 00:43:57.829052 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:57.829071 | orchestrator |
2026-03-28 00:43:57.829090 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:57.829109 | orchestrator | Saturday 28 March 2026 00:43:55 +0000 (0:00:00.157) 0:00:34.574 ********
2026-03-28 00:43:57.829128 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:57.829148 | orchestrator |
2026-03-28 00:43:57.829170 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:57.829190 | orchestrator | Saturday 28 March 2026 00:43:55 +0000 (0:00:00.168) 0:00:34.743 ********
2026-03-28 00:43:57.829210 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:57.829232 | orchestrator |
2026-03-28 00:43:57.829253 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:57.829275 | orchestrator | Saturday 28 March 2026 00:43:55 +0000 (0:00:00.186) 0:00:34.929 ********
2026-03-28 00:43:57.829296 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:57.829314 | orchestrator |
2026-03-28 00:43:57.829325 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:57.829336 | orchestrator | Saturday 28 March 2026 00:43:55 +0000 (0:00:00.576) 0:00:35.506 ********
2026-03-28 00:43:57.829347 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:57.829357 | orchestrator |
2026-03-28 00:43:57.829368 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:57.829379 | orchestrator | Saturday 28 March 2026 00:43:56 +0000 (0:00:00.256) 0:00:35.762 ********
2026-03-28 00:43:57.829390 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:57.829401 | orchestrator |
2026-03-28 00:43:57.829412 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:57.829423 | orchestrator | Saturday 28 March 2026 00:43:56 +0000 (0:00:00.181) 0:00:35.943 ********
2026-03-28 00:43:57.829434 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-28 00:43:57.829445 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-28 00:43:57.829456 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-28 00:43:57.829467 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-28 00:43:57.829477 | orchestrator |
2026-03-28 00:43:57.829488 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:57.829499 | orchestrator | Saturday 28 March 2026 00:43:57 +0000 (0:00:00.629) 0:00:36.573 ********
2026-03-28 00:43:57.829510 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:57.829547 | orchestrator |
2026-03-28 00:43:57.829570 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:57.829589 | orchestrator | Saturday 28 March 2026 00:43:57 +0000 (0:00:00.177) 0:00:36.751 ********
2026-03-28 00:43:57.829601 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:57.829612 | orchestrator |
2026-03-28 00:43:57.829623 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:57.829634 | orchestrator | Saturday 28 March 2026 00:43:57 +0000 (0:00:00.193) 0:00:36.945 ********
2026-03-28 00:43:57.829645 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:57.829656 | orchestrator |
2026-03-28 00:43:57.829667 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:57.829677 | orchestrator | Saturday 28 March 2026 00:43:57 +0000 (0:00:00.189) 0:00:37.134 ********
2026-03-28 00:43:57.829689 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:57.829699 | orchestrator |
2026-03-28 00:43:57.829720 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-28 00:44:01.579857 | orchestrator | Saturday 28 March 2026 00:43:57 +0000 (0:00:00.199) 0:00:37.334 ********
2026-03-28 00:44:01.579962 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-03-28 00:44:01.579987 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-03-28 00:44:01.580007 | orchestrator |
2026-03-28 00:44:01.580027 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-28 00:44:01.580047 | orchestrator | Saturday 28 March 2026 00:43:57 +0000 (0:00:00.124) 0:00:37.459 ********
2026-03-28 00:44:01.580066 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:44:01.580086 | orchestrator |
2026-03-28 00:44:01.580104 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-28 00:44:01.580122 | orchestrator | Saturday 28 March 2026 00:43:58 +0000 (0:00:00.090) 0:00:37.549 ********
2026-03-28 00:44:01.580140 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:44:01.580158 | orchestrator |
2026-03-28 00:44:01.580177 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-28 00:44:01.580197 | orchestrator | Saturday 28 March 2026 00:43:58 +0000 (0:00:00.092) 0:00:37.642 ********
2026-03-28 00:44:01.580215 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:44:01.580233 | orchestrator |
2026-03-28 00:44:01.580251 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-28 00:44:01.580269 | orchestrator | Saturday 28 March 2026 00:43:58 +0000 (0:00:00.257) 0:00:37.899 ********
2026-03-28 00:44:01.580288 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:44:01.580308 | orchestrator |
2026-03-28 00:44:01.580326 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-28 00:44:01.580344 | orchestrator | Saturday 28 March 2026 00:43:58 +0000 (0:00:00.137) 0:00:38.037 ********
2026-03-28 00:44:01.580363 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8b5a6aab-ec84-598a-adc7-d040a5844549'}})
2026-03-28 00:44:01.580382 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'}})
2026-03-28 00:44:01.580401 | orchestrator |
2026-03-28 00:44:01.580421 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-28 00:44:01.580440 | orchestrator | Saturday 28 March 2026 00:43:58 +0000 (0:00:00.150) 0:00:38.187 ********
2026-03-28 00:44:01.580459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8b5a6aab-ec84-598a-adc7-d040a5844549'}})
2026-03-28 00:44:01.580495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'}})
2026-03-28 00:44:01.580516 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:44:01.580577 | orchestrator |
2026-03-28 00:44:01.580596 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-28 00:44:01.580615 | orchestrator | Saturday 28 March 2026 00:43:58 +0000 (0:00:00.138) 0:00:38.326 ********
2026-03-28 00:44:01.580634 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8b5a6aab-ec84-598a-adc7-d040a5844549'}})
2026-03-28 00:44:01.580681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'}})
2026-03-28 00:44:01.580700 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:44:01.580719 | orchestrator |
2026-03-28 00:44:01.580738 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-28 00:44:01.580758 | orchestrator | Saturday 28 March 2026 00:43:58 +0000 (0:00:00.149) 0:00:38.476 ********
2026-03-28 00:44:01.580776 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8b5a6aab-ec84-598a-adc7-d040a5844549'}})
2026-03-28 00:44:01.580794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'}})
2026-03-28 00:44:01.580813 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:44:01.580831 | orchestrator |
2026-03-28 00:44:01.580849 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-28 00:44:01.580868 | orchestrator | Saturday 28 March 2026 00:43:59 +0000 (0:00:00.149) 0:00:38.625 ********
2026-03-28 00:44:01.580887 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:44:01.580905 | orchestrator |
2026-03-28 00:44:01.580923 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-28 00:44:01.580940 | orchestrator | Saturday 28 March 2026 00:43:59 +0000 (0:00:00.109) 0:00:38.734 ********
2026-03-28 00:44:01.580959 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:44:01.580978 | orchestrator |
2026-03-28 00:44:01.580997 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-28 00:44:01.581015 | orchestrator | Saturday 28 March 2026 00:43:59 +0000 (0:00:00.125) 0:00:38.860 ********
2026-03-28 00:44:01.581033 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:44:01.581052 | orchestrator |
2026-03-28 00:44:01.581071 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-28 00:44:01.581090 | orchestrator | Saturday 28 March 2026 00:43:59 +0000 (0:00:00.136) 0:00:38.996 ********
2026-03-28 00:44:01.581109 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:44:01.581127 | orchestrator |
2026-03-28 00:44:01.581145 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-28 00:44:01.581163 | orchestrator | Saturday 28 March 2026 00:43:59 +0000 (0:00:00.193) 0:00:39.189 ********
2026-03-28 00:44:01.581181 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:44:01.581200 | orchestrator |
2026-03-28 00:44:01.581219 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-28 00:44:01.581237 | orchestrator | Saturday 28 March 2026 00:43:59 +0000 (0:00:00.118) 0:00:39.308 ********
2026-03-28 00:44:01.581256 | orchestrator | ok: [testbed-node-5] => {
2026-03-28 00:44:01.581275 | orchestrator |     "ceph_osd_devices": {
2026-03-28 00:44:01.581293 | orchestrator |         "sdb": {
2026-03-28 00:44:01.581334 | orchestrator |             "osd_lvm_uuid": "8b5a6aab-ec84-598a-adc7-d040a5844549"
2026-03-28 00:44:01.581356 | orchestrator |         },
2026-03-28 00:44:01.581374 | orchestrator |         "sdc": {
2026-03-28 00:44:01.581392 | orchestrator |             "osd_lvm_uuid": "02fe8db3-ee90-5f59-9f4e-fa58d6febfbe"
2026-03-28 00:44:01.581410 | orchestrator |         }
2026-03-28 00:44:01.581429 | orchestrator |     }
2026-03-28 00:44:01.581450 | orchestrator | }
2026-03-28 00:44:01.581469 | orchestrator |
2026-03-28 00:44:01.581487 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-28 00:44:01.581508 | orchestrator | Saturday 28 March 2026 00:43:59 +0000 (0:00:00.127) 0:00:39.435 ********
2026-03-28 00:44:01.581558 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:44:01.581571 | orchestrator |
2026-03-28 00:44:01.581582 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-28 00:44:01.581593 | orchestrator | Saturday 28 March 2026 00:44:00 +0000 (0:00:00.247) 0:00:39.682 ********
2026-03-28 00:44:01.581604 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:44:01.581625 | orchestrator |
2026-03-28 00:44:01.581636 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-28 00:44:01.581647 | orchestrator | Saturday 28 March 2026 00:44:00 +0000 (0:00:00.122) 0:00:39.805 ********
2026-03-28 00:44:01.581658 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:44:01.581669 | orchestrator |
2026-03-28 00:44:01.581680 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-28 00:44:01.581691 | orchestrator | Saturday 28 March 2026 00:44:00 +0000 (0:00:00.109) 0:00:39.915 ********
2026-03-28 00:44:01.581702 | orchestrator | changed: [testbed-node-5] => {
2026-03-28 00:44:01.581713 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-28 00:44:01.581723 | orchestrator |         "ceph_osd_devices": {
2026-03-28 00:44:01.581734 | orchestrator |             "sdb": {
2026-03-28 00:44:01.581745 | orchestrator |                 "osd_lvm_uuid": "8b5a6aab-ec84-598a-adc7-d040a5844549"
2026-03-28 00:44:01.581756 | orchestrator |             },
2026-03-28 00:44:01.581767 | orchestrator |             "sdc": {
2026-03-28 00:44:01.581778 | orchestrator |                 "osd_lvm_uuid": "02fe8db3-ee90-5f59-9f4e-fa58d6febfbe"
2026-03-28 00:44:01.581789 | orchestrator |             }
2026-03-28 00:44:01.581799 | orchestrator |         },
2026-03-28 00:44:01.581810 | orchestrator |         "lvm_volumes": [
2026-03-28 00:44:01.581821 | orchestrator |             {
2026-03-28 00:44:01.581832 | orchestrator |                 "data": "osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549",
2026-03-28 00:44:01.581843 | orchestrator |                 "data_vg": "ceph-8b5a6aab-ec84-598a-adc7-d040a5844549"
2026-03-28 00:44:01.581854 | orchestrator |             },
2026-03-28 00:44:01.581865 | orchestrator |             {
2026-03-28 00:44:01.581876 | orchestrator |                 "data": "osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe",
2026-03-28 00:44:01.581896 | orchestrator |                 "data_vg": "ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe"
2026-03-28 00:44:01.581906 | orchestrator |             }
2026-03-28 00:44:01.581916 | orchestrator |         ]
2026-03-28 00:44:01.581929 | orchestrator |     }
2026-03-28 00:44:01.581939 | orchestrator | }
2026-03-28 00:44:01.581948 | orchestrator |
2026-03-28 00:44:01.581958 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-28 00:44:01.581968 | orchestrator | Saturday 28 March 2026 00:44:00 +0000 (0:00:00.225) 0:00:40.141 ********
2026-03-28 00:44:01.581977 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-28 00:44:01.581987 | orchestrator |
2026-03-28 00:44:01.581996 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:44:01.582006 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-28 00:44:01.582066 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-28 00:44:01.582079 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-28 00:44:01.582089 | orchestrator |
2026-03-28 00:44:01.582099 | orchestrator |
2026-03-28 00:44:01.582108 | orchestrator |
2026-03-28 00:44:01.582118 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:44:01.582127 | orchestrator | Saturday 28 March 2026 00:44:01 +0000 (0:00:00.921) 0:00:41.063 ********
2026-03-28 00:44:01.582137 | orchestrator | ===============================================================================
2026-03-28 00:44:01.582147 | orchestrator | Write configuration file ------------------------------------------------ 3.85s
2026-03-28 00:44:01.582156 | orchestrator | Add known links to the list of available block devices ------------------ 1.28s
2026-03-28 00:44:01.582166 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.28s
2026-03-28 00:44:01.582175 | orchestrator | Add known partitions to the list of available block devices ------------- 1.04s
2026-03-28 00:44:01.582193 | orchestrator | Add known partitions to the list of available block devices ------------- 1.04s
2026-03-28 00:44:01.582202 | orchestrator | Add known links to the list of available block devices ------------------ 0.89s
2026-03-28 00:44:01.582212 | orchestrator | Print configuration data ------------------------------------------------ 0.85s
2026-03-28 00:44:01.582221 | orchestrator | Add known links to the list of available block devices ------------------ 0.77s
2026-03-28 00:44:01.582234 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s
2026-03-28 00:44:01.582251 | orchestrator | Get initial list of available block devices ----------------------------- 0.71s
2026-03-28 00:44:01.582266 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.66s
2026-03-28 00:44:01.582282 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2026-03-28 00:44:01.582298 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2026-03-28 00:44:01.582326 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2026-03-28 00:44:01.805150 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s
2026-03-28 00:44:01.805240 | orchestrator | Set DB devices config data ---------------------------------------------- 0.63s
2026-03-28 00:44:01.805254 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s
2026-03-28 00:44:01.805266 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s
2026-03-28 00:44:01.805277 | orchestrator | Add known partitions to the list of available block devices ------------- 0.56s
2026-03-28 00:44:01.805288 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.56s
2026-03-28 00:44:24.394544 | orchestrator | 2026-03-28 00:44:24 | INFO  | Task ecbfc891-0ba3-470f-b99f-89c8635ae062 (sync inventory) is running in background. Output coming soon.
2026-03-28 00:44:53.024973 | orchestrator | 2026-03-28 00:44:25 | INFO  | Starting group_vars file reorganization
2026-03-28 00:44:53.025115 | orchestrator | 2026-03-28 00:44:25 | INFO  | Moved 0 file(s) to their respective directories
2026-03-28 00:44:53.025145 | orchestrator | 2026-03-28 00:44:25 | INFO  | Group_vars file reorganization completed
2026-03-28 00:44:53.025165 | orchestrator | 2026-03-28 00:44:28 | INFO  | Starting variable preparation from inventory
2026-03-28 00:44:53.025184 | orchestrator | 2026-03-28 00:44:32 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-28 00:44:53.025205 | orchestrator | 2026-03-28 00:44:32 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-28 00:44:53.025225 | orchestrator | 2026-03-28 00:44:32 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-28 00:44:53.025245 | orchestrator | 2026-03-28 00:44:32 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-28 00:44:53.025264 | orchestrator | 2026-03-28 00:44:32 | INFO  | Variable preparation completed
2026-03-28 00:44:53.025283 | orchestrator | 2026-03-28 00:44:33 | INFO  | Starting inventory overwrite handling
2026-03-28 00:44:53.025303 | orchestrator | 2026-03-28 00:44:33 | INFO  | Handling group overwrites in 99-overwrite
2026-03-28 00:44:53.025325 | orchestrator | 2026-03-28 00:44:33 | INFO  | Removing group frr:children from 60-generic
2026-03-28 00:44:53.025345 | orchestrator | 2026-03-28 00:44:33 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-28 00:44:53.025364 | orchestrator | 2026-03-28 00:44:33 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-28 00:44:53.025383 | orchestrator | 2026-03-28 00:44:33 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-28 00:44:53.025401 | orchestrator | 2026-03-28 00:44:33 | INFO  | Handling group overwrites in 20-roles
2026-03-28 00:44:53.025420 | orchestrator | 2026-03-28 00:44:33 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-28 00:44:53.025479 | orchestrator | 2026-03-28 00:44:33 | INFO  | Removed 5 group(s) in total
2026-03-28 00:44:53.025523 | orchestrator | 2026-03-28 00:44:33 | INFO  | Inventory overwrite handling completed
2026-03-28 00:44:53.025538 | orchestrator | 2026-03-28 00:44:34 | INFO  | Starting merge of inventory files
2026-03-28 00:44:53.025551 | orchestrator | 2026-03-28 00:44:34 | INFO  | Inventory files merged successfully
2026-03-28 00:44:53.025563 | orchestrator | 2026-03-28 00:44:39 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-28 00:44:53.025576 | orchestrator | 2026-03-28 00:44:51 | INFO  | Successfully wrote ClusterShell configuration
2026-03-28 00:44:53.025589 | orchestrator | [master e37ed68] 2026-03-28-00-44
2026-03-28 00:44:53.025603 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-28 00:44:55.471177 | orchestrator | 2026-03-28 00:44:55 | INFO  | Task a97c71b0-d2df-43ca-9262-5c567e3b5579 (ceph-create-lvm-devices) was prepared for execution.
2026-03-28 00:44:55.471285 | orchestrator | 2026-03-28 00:44:55 | INFO  | It takes a moment until task a97c71b0-d2df-43ca-9262-5c567e3b5579 (ceph-create-lvm-devices) has been started and output is visible here.
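The "Print configuration data" output above shows the mapping this playbook builds: each device in `ceph_osd_devices` carries an `osd_lvm_uuid`, and the derived `lvm_volumes` entry names its logical volume `osd-block-<uuid>` and its volume group `ceph-<uuid>`. A minimal sketch of that mapping (the function and variable names are illustrative; the playbook's actual implementation is not shown in this log):

```python
# Hedged sketch: reproduce the ceph_osd_devices -> lvm_volumes expansion
# visible in the log output. Only the naming scheme is taken from the log;
# the real playbook logic lives in the OSISM Ansible tasks.

def lvm_volumes_from(ceph_osd_devices):
    """Expand the per-device UUID map into the lvm_volumes list."""
    volumes = []
    for device in sorted(ceph_osd_devices):
        osd_uuid = ceph_osd_devices[device]["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{osd_uuid}",   # logical volume name
            "data_vg": f"ceph-{osd_uuid}",     # volume group name
        })
    return volumes

# Values for testbed-node-5 as printed in the log above.
devices = {
    "sdb": {"osd_lvm_uuid": "8b5a6aab-ec84-598a-adc7-d040a5844549"},
    "sdc": {"osd_lvm_uuid": "02fe8db3-ee90-5f59-9f4e-fa58d6febfbe"},
}
print(lvm_volumes_from(devices))
```

Note that every `osd_lvm_uuid` in the log is a version-5 UUID (the third group starts with `5`), which suggests they are derived deterministically per host and device rather than generated randomly; that inference is not confirmed by this log.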
2026-03-28 00:45:08.559873 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-28 00:45:08.559942 | orchestrator | 2.16.14
2026-03-28 00:45:08.559952 | orchestrator |
2026-03-28 00:45:08.559972 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-28 00:45:08.559981 | orchestrator |
2026-03-28 00:45:08.559989 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-28 00:45:08.559997 | orchestrator | Saturday 28 March 2026 00:45:00 +0000 (0:00:00.329) 0:00:00.329 ********
2026-03-28 00:45:08.560006 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-28 00:45:08.560014 | orchestrator |
2026-03-28 00:45:08.560022 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-28 00:45:08.560030 | orchestrator | Saturday 28 March 2026 00:45:00 +0000 (0:00:00.251) 0:00:00.580 ********
2026-03-28 00:45:08.560038 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:45:08.560046 | orchestrator |
2026-03-28 00:45:08.560054 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:45:08.560063 | orchestrator | Saturday 28 March 2026 00:45:00 +0000 (0:00:00.292) 0:00:00.872 ********
2026-03-28 00:45:08.560071 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-28 00:45:08.560079 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-28 00:45:08.560087 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-28 00:45:08.560095 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-28 00:45:08.560103 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-28 00:45:08.560112 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-28 00:45:08.560126 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-28 00:45:08.560139 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-28 00:45:08.560151 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-28 00:45:08.560180 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-28 00:45:08.560194 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-28 00:45:08.560205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-28 00:45:08.560213 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-28 00:45:08.560238 | orchestrator |
2026-03-28 00:45:08.560247 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:45:08.560255 | orchestrator | Saturday 28 March 2026 00:45:01 +0000 (0:00:00.599) 0:00:01.471 ********
2026-03-28 00:45:08.560263 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:45:08.560271 | orchestrator |
2026-03-28 00:45:08.560295 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:45:08.560304 | orchestrator | Saturday 28 March 2026 00:45:01 +0000 (0:00:00.200) 0:00:01.672 ********
2026-03-28 00:45:08.560311 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:45:08.560319 | orchestrator |
2026-03-28 00:45:08.560327 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:45:08.560339 | orchestrator | Saturday 28 March 2026 00:45:01 +0000 (0:00:00.206) 0:00:01.879 ********
2026-03-28 00:45:08.560347 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:45:08.560356 | orchestrator |
2026-03-28 00:45:08.560364 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:45:08.560371 | orchestrator | Saturday 28 March 2026 00:45:01 +0000 (0:00:00.221) 0:00:02.100 ********
2026-03-28 00:45:08.560379 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:45:08.560387 | orchestrator |
2026-03-28 00:45:08.560395 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:45:08.560403 | orchestrator | Saturday 28 March 2026 00:45:02 +0000 (0:00:00.260) 0:00:02.361 ********
2026-03-28 00:45:08.560411 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:45:08.560418 | orchestrator |
2026-03-28 00:45:08.560427 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:45:08.560434 | orchestrator | Saturday 28 March 2026 00:45:02 +0000 (0:00:00.217) 0:00:02.579 ********
2026-03-28 00:45:08.560442 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:45:08.560450 | orchestrator |
2026-03-28 00:45:08.560458 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:45:08.560466 | orchestrator | Saturday 28 March 2026 00:45:02 +0000 (0:00:00.209) 0:00:02.788 ********
2026-03-28 00:45:08.560474 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:45:08.560482 | orchestrator |
2026-03-28 00:45:08.560504 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:45:08.560512 | orchestrator | Saturday 28 March 2026 00:45:02 +0000 (0:00:00.217) 0:00:03.005 ********
2026-03-28 00:45:08.560520 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:45:08.560528 | orchestrator |
2026-03-28 00:45:08.560536 | orchestrator | TASK [Add known links to the list of available block devices]
****************** 2026-03-28 00:45:08.560544 | orchestrator | Saturday 28 March 2026 00:45:03 +0000 (0:00:00.203) 0:00:03.209 ******** 2026-03-28 00:45:08.560551 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22) 2026-03-28 00:45:08.560561 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22) 2026-03-28 00:45:08.560568 | orchestrator | 2026-03-28 00:45:08.560577 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:45:08.560598 | orchestrator | Saturday 28 March 2026 00:45:03 +0000 (0:00:00.444) 0:00:03.654 ******** 2026-03-28 00:45:08.560607 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9560503a-139c-4329-8ffd-1ea1e0c721e5) 2026-03-28 00:45:08.560615 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9560503a-139c-4329-8ffd-1ea1e0c721e5) 2026-03-28 00:45:08.560623 | orchestrator | 2026-03-28 00:45:08.560631 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:45:08.560639 | orchestrator | Saturday 28 March 2026 00:45:04 +0000 (0:00:00.833) 0:00:04.487 ******** 2026-03-28 00:45:08.560647 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_64213c7d-5962-413c-aa45-2f60eed78f32) 2026-03-28 00:45:08.560654 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_64213c7d-5962-413c-aa45-2f60eed78f32) 2026-03-28 00:45:08.560669 | orchestrator | 2026-03-28 00:45:08.560677 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:45:08.560685 | orchestrator | Saturday 28 March 2026 00:45:05 +0000 (0:00:00.736) 0:00:05.223 ******** 2026-03-28 00:45:08.560692 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_94eace61-73f7-4993-ae2a-02303df71bb3) 2026-03-28 00:45:08.560700 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_94eace61-73f7-4993-ae2a-02303df71bb3) 2026-03-28 00:45:08.560708 | orchestrator | 2026-03-28 00:45:08.560716 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:45:08.560724 | orchestrator | Saturday 28 March 2026 00:45:05 +0000 (0:00:00.889) 0:00:06.112 ******** 2026-03-28 00:45:08.560732 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-28 00:45:08.560740 | orchestrator | 2026-03-28 00:45:08.560747 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:45:08.560755 | orchestrator | Saturday 28 March 2026 00:45:06 +0000 (0:00:00.369) 0:00:06.482 ******** 2026-03-28 00:45:08.560763 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-28 00:45:08.560771 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-28 00:45:08.560779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-28 00:45:08.560786 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-28 00:45:08.560794 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-28 00:45:08.560802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-28 00:45:08.560810 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-28 00:45:08.560818 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-28 00:45:08.560825 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-28 00:45:08.560833 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-28 00:45:08.560841 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-28 00:45:08.560849 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-28 00:45:08.560856 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-28 00:45:08.560864 | orchestrator | 2026-03-28 00:45:08.560872 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:45:08.560880 | orchestrator | Saturday 28 March 2026 00:45:06 +0000 (0:00:00.647) 0:00:07.130 ******** 2026-03-28 00:45:08.560888 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:08.560896 | orchestrator | 2026-03-28 00:45:08.560903 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:45:08.560911 | orchestrator | Saturday 28 March 2026 00:45:07 +0000 (0:00:00.266) 0:00:07.397 ******** 2026-03-28 00:45:08.560919 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:08.560927 | orchestrator | 2026-03-28 00:45:08.560935 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:45:08.560942 | orchestrator | Saturday 28 March 2026 00:45:07 +0000 (0:00:00.240) 0:00:07.637 ******** 2026-03-28 00:45:08.560950 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:08.560958 | orchestrator | 2026-03-28 00:45:08.560966 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:45:08.560974 | orchestrator | Saturday 28 March 2026 00:45:07 +0000 (0:00:00.232) 0:00:07.870 ******** 2026-03-28 00:45:08.560981 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:08.560994 | orchestrator | 2026-03-28 00:45:08.561002 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-28 00:45:08.561010 | orchestrator | Saturday 28 March 2026 00:45:07 +0000 (0:00:00.220) 0:00:08.090 ******** 2026-03-28 00:45:08.561018 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:08.561025 | orchestrator | 2026-03-28 00:45:08.561033 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:45:08.561041 | orchestrator | Saturday 28 March 2026 00:45:08 +0000 (0:00:00.222) 0:00:08.313 ******** 2026-03-28 00:45:08.561049 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:08.561057 | orchestrator | 2026-03-28 00:45:08.561064 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:45:08.561072 | orchestrator | Saturday 28 March 2026 00:45:08 +0000 (0:00:00.211) 0:00:08.525 ******** 2026-03-28 00:45:08.561080 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:08.561088 | orchestrator | 2026-03-28 00:45:08.561100 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:45:17.225043 | orchestrator | Saturday 28 March 2026 00:45:08 +0000 (0:00:00.201) 0:00:08.726 ******** 2026-03-28 00:45:17.225135 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:17.225148 | orchestrator | 2026-03-28 00:45:17.225158 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:45:17.225168 | orchestrator | Saturday 28 March 2026 00:45:08 +0000 (0:00:00.228) 0:00:08.954 ******** 2026-03-28 00:45:17.225177 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-28 00:45:17.225187 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-28 00:45:17.225196 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-28 00:45:17.225205 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-28 00:45:17.225214 | orchestrator | 2026-03-28 
00:45:17.225223 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:45:17.225232 | orchestrator | Saturday 28 March 2026 00:45:09 +0000 (0:00:00.978) 0:00:09.933 ******** 2026-03-28 00:45:17.225240 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:17.225249 | orchestrator | 2026-03-28 00:45:17.225258 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:45:17.225267 | orchestrator | Saturday 28 March 2026 00:45:09 +0000 (0:00:00.195) 0:00:10.128 ******** 2026-03-28 00:45:17.225276 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:17.225284 | orchestrator | 2026-03-28 00:45:17.225293 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:45:17.225302 | orchestrator | Saturday 28 March 2026 00:45:10 +0000 (0:00:00.211) 0:00:10.339 ******** 2026-03-28 00:45:17.225312 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:17.225321 | orchestrator | 2026-03-28 00:45:17.225330 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:45:17.225351 | orchestrator | Saturday 28 March 2026 00:45:10 +0000 (0:00:00.287) 0:00:10.627 ******** 2026-03-28 00:45:17.225361 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:17.225369 | orchestrator | 2026-03-28 00:45:17.225378 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-28 00:45:17.225387 | orchestrator | Saturday 28 March 2026 00:45:10 +0000 (0:00:00.325) 0:00:10.952 ******** 2026-03-28 00:45:17.225396 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:17.225405 | orchestrator | 2026-03-28 00:45:17.225414 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-28 00:45:17.225422 | orchestrator | Saturday 28 March 2026 00:45:10 +0000 (0:00:00.140) 
0:00:11.093 ******** 2026-03-28 00:45:17.225450 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e282229f-a8c2-5daa-9c69-6eb93429113b'}}) 2026-03-28 00:45:17.225460 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1d415d19-3246-5675-b441-c36cba308c79'}}) 2026-03-28 00:45:17.225469 | orchestrator | 2026-03-28 00:45:17.225514 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-28 00:45:17.225548 | orchestrator | Saturday 28 March 2026 00:45:11 +0000 (0:00:00.252) 0:00:11.345 ******** 2026-03-28 00:45:17.225559 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'data_vg': 'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'}) 2026-03-28 00:45:17.225569 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'data_vg': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'}) 2026-03-28 00:45:17.225577 | orchestrator | 2026-03-28 00:45:17.225587 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-28 00:45:17.225603 | orchestrator | Saturday 28 March 2026 00:45:13 +0000 (0:00:02.253) 0:00:13.599 ******** 2026-03-28 00:45:17.225613 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'data_vg': 'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'})  2026-03-28 00:45:17.225625 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'data_vg': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'})  2026-03-28 00:45:17.225635 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:17.225645 | orchestrator | 2026-03-28 00:45:17.225655 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-28 00:45:17.225665 | orchestrator | Saturday 28 March 2026 
00:45:13 +0000 (0:00:00.147) 0:00:13.746 ******** 2026-03-28 00:45:17.225675 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'data_vg': 'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'}) 2026-03-28 00:45:17.225686 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'data_vg': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'}) 2026-03-28 00:45:17.225695 | orchestrator | 2026-03-28 00:45:17.225705 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-28 00:45:17.225715 | orchestrator | Saturday 28 March 2026 00:45:15 +0000 (0:00:01.515) 0:00:15.262 ******** 2026-03-28 00:45:17.225725 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'data_vg': 'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'})  2026-03-28 00:45:17.225735 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'data_vg': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'})  2026-03-28 00:45:17.225745 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:17.225755 | orchestrator | 2026-03-28 00:45:17.225765 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-28 00:45:17.225775 | orchestrator | Saturday 28 March 2026 00:45:15 +0000 (0:00:00.167) 0:00:15.430 ******** 2026-03-28 00:45:17.225801 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:17.225811 | orchestrator | 2026-03-28 00:45:17.225821 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-28 00:45:17.225831 | orchestrator | Saturday 28 March 2026 00:45:15 +0000 (0:00:00.132) 0:00:15.563 ******** 2026-03-28 00:45:17.225841 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'data_vg': 
'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'})  2026-03-28 00:45:17.225851 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'data_vg': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'})  2026-03-28 00:45:17.225861 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:17.225871 | orchestrator | 2026-03-28 00:45:17.225881 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-28 00:45:17.225891 | orchestrator | Saturday 28 March 2026 00:45:15 +0000 (0:00:00.371) 0:00:15.934 ******** 2026-03-28 00:45:17.225901 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:17.225916 | orchestrator | 2026-03-28 00:45:17.225933 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-28 00:45:17.225948 | orchestrator | Saturday 28 March 2026 00:45:15 +0000 (0:00:00.162) 0:00:16.097 ******** 2026-03-28 00:45:17.225973 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'data_vg': 'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'})  2026-03-28 00:45:17.225988 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'data_vg': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'})  2026-03-28 00:45:17.226003 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:17.226085 | orchestrator | 2026-03-28 00:45:17.226111 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-28 00:45:17.226124 | orchestrator | Saturday 28 March 2026 00:45:16 +0000 (0:00:00.171) 0:00:16.269 ******** 2026-03-28 00:45:17.226135 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:17.226146 | orchestrator | 2026-03-28 00:45:17.226157 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-28 00:45:17.226168 | orchestrator | 
Saturday 28 March 2026 00:45:16 +0000 (0:00:00.143) 0:00:16.412 ******** 2026-03-28 00:45:17.226215 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'data_vg': 'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'})  2026-03-28 00:45:17.226226 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'data_vg': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'})  2026-03-28 00:45:17.226237 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:17.226248 | orchestrator | 2026-03-28 00:45:17.226259 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-28 00:45:17.226270 | orchestrator | Saturday 28 March 2026 00:45:16 +0000 (0:00:00.155) 0:00:16.567 ******** 2026-03-28 00:45:17.226280 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:45:17.226292 | orchestrator | 2026-03-28 00:45:17.226303 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-28 00:45:17.226314 | orchestrator | Saturday 28 March 2026 00:45:16 +0000 (0:00:00.192) 0:00:16.760 ******** 2026-03-28 00:45:17.226332 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'data_vg': 'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'})  2026-03-28 00:45:17.226343 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'data_vg': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'})  2026-03-28 00:45:17.226354 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:17.226365 | orchestrator | 2026-03-28 00:45:17.226376 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-28 00:45:17.226387 | orchestrator | Saturday 28 March 2026 00:45:16 +0000 (0:00:00.149) 0:00:16.910 ******** 2026-03-28 00:45:17.226398 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'data_vg': 'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'})  2026-03-28 00:45:17.226409 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'data_vg': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'})  2026-03-28 00:45:17.226420 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:17.226431 | orchestrator | 2026-03-28 00:45:17.226441 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-28 00:45:17.226453 | orchestrator | Saturday 28 March 2026 00:45:16 +0000 (0:00:00.171) 0:00:17.081 ******** 2026-03-28 00:45:17.226464 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'data_vg': 'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'})  2026-03-28 00:45:17.226501 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'data_vg': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'})  2026-03-28 00:45:17.226515 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:17.226526 | orchestrator | 2026-03-28 00:45:17.226537 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-28 00:45:17.226548 | orchestrator | Saturday 28 March 2026 00:45:17 +0000 (0:00:00.160) 0:00:17.242 ******** 2026-03-28 00:45:17.226568 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:17.226579 | orchestrator | 2026-03-28 00:45:17.226590 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-28 00:45:17.226611 | orchestrator | Saturday 28 March 2026 00:45:17 +0000 (0:00:00.149) 0:00:17.391 ******** 2026-03-28 00:45:24.294131 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:24.294232 | orchestrator | 2026-03-28 00:45:24.294248 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-03-28 00:45:24.294262 | orchestrator | Saturday 28 March 2026 00:45:17 +0000 (0:00:00.134) 0:00:17.526 ******** 2026-03-28 00:45:24.294274 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:24.294285 | orchestrator | 2026-03-28 00:45:24.294297 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-28 00:45:24.294308 | orchestrator | Saturday 28 March 2026 00:45:17 +0000 (0:00:00.129) 0:00:17.656 ******** 2026-03-28 00:45:24.294319 | orchestrator | ok: [testbed-node-3] => { 2026-03-28 00:45:24.294331 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-28 00:45:24.294343 | orchestrator | } 2026-03-28 00:45:24.294354 | orchestrator | 2026-03-28 00:45:24.294366 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-28 00:45:24.294377 | orchestrator | Saturday 28 March 2026 00:45:17 +0000 (0:00:00.390) 0:00:18.046 ******** 2026-03-28 00:45:24.294388 | orchestrator | ok: [testbed-node-3] => { 2026-03-28 00:45:24.294399 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-28 00:45:24.294410 | orchestrator | } 2026-03-28 00:45:24.294420 | orchestrator | 2026-03-28 00:45:24.294431 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-28 00:45:24.294442 | orchestrator | Saturday 28 March 2026 00:45:18 +0000 (0:00:00.162) 0:00:18.209 ******** 2026-03-28 00:45:24.294453 | orchestrator | ok: [testbed-node-3] => { 2026-03-28 00:45:24.294533 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-28 00:45:24.294548 | orchestrator | } 2026-03-28 00:45:24.294559 | orchestrator | 2026-03-28 00:45:24.294570 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-28 00:45:24.294582 | orchestrator | Saturday 28 March 2026 00:45:18 +0000 (0:00:00.160) 0:00:18.369 ******** 2026-03-28 00:45:24.294593 | orchestrator | ok: 
[testbed-node-3] 2026-03-28 00:45:24.294604 | orchestrator | 2026-03-28 00:45:24.294618 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-28 00:45:24.294630 | orchestrator | Saturday 28 March 2026 00:45:18 +0000 (0:00:00.803) 0:00:19.173 ******** 2026-03-28 00:45:24.294643 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:45:24.294655 | orchestrator | 2026-03-28 00:45:24.294668 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-28 00:45:24.294680 | orchestrator | Saturday 28 March 2026 00:45:19 +0000 (0:00:00.539) 0:00:19.713 ******** 2026-03-28 00:45:24.294692 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:45:24.294704 | orchestrator | 2026-03-28 00:45:24.294717 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-28 00:45:24.294729 | orchestrator | Saturday 28 March 2026 00:45:20 +0000 (0:00:00.557) 0:00:20.270 ******** 2026-03-28 00:45:24.294741 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:45:24.294754 | orchestrator | 2026-03-28 00:45:24.294766 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-28 00:45:24.294779 | orchestrator | Saturday 28 March 2026 00:45:20 +0000 (0:00:00.145) 0:00:20.415 ******** 2026-03-28 00:45:24.294792 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:24.294805 | orchestrator | 2026-03-28 00:45:24.294817 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-28 00:45:24.294829 | orchestrator | Saturday 28 March 2026 00:45:20 +0000 (0:00:00.120) 0:00:20.536 ******** 2026-03-28 00:45:24.294842 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:24.294854 | orchestrator | 2026-03-28 00:45:24.294867 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-28 00:45:24.294907 | orchestrator | 
Saturday 28 March 2026 00:45:20 +0000 (0:00:00.124) 0:00:20.660 ******** 2026-03-28 00:45:24.294920 | orchestrator | ok: [testbed-node-3] => { 2026-03-28 00:45:24.294933 | orchestrator |  "vgs_report": { 2026-03-28 00:45:24.294945 | orchestrator |  "vg": [] 2026-03-28 00:45:24.294956 | orchestrator |  } 2026-03-28 00:45:24.294967 | orchestrator | } 2026-03-28 00:45:24.294978 | orchestrator | 2026-03-28 00:45:24.294989 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-28 00:45:24.295000 | orchestrator | Saturday 28 March 2026 00:45:20 +0000 (0:00:00.140) 0:00:20.801 ******** 2026-03-28 00:45:24.295011 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:24.295022 | orchestrator | 2026-03-28 00:45:24.295050 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-28 00:45:24.295061 | orchestrator | Saturday 28 March 2026 00:45:20 +0000 (0:00:00.129) 0:00:20.930 ******** 2026-03-28 00:45:24.295072 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:24.295083 | orchestrator | 2026-03-28 00:45:24.295094 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-28 00:45:24.295105 | orchestrator | Saturday 28 March 2026 00:45:20 +0000 (0:00:00.133) 0:00:21.064 ******** 2026-03-28 00:45:24.295116 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:24.295127 | orchestrator | 2026-03-28 00:45:24.295138 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-28 00:45:24.295149 | orchestrator | Saturday 28 March 2026 00:45:21 +0000 (0:00:00.367) 0:00:21.432 ******** 2026-03-28 00:45:24.295160 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:24.295171 | orchestrator | 2026-03-28 00:45:24.295182 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-28 00:45:24.295193 | orchestrator | 
Saturday 28 March 2026 00:45:21 +0000 (0:00:00.175) 0:00:21.607 ********
skipping: [testbed-node-3]

TASK [Print size needed for LVs on ceph_wal_devices] ***************************
Saturday 28 March 2026 00:45:21 +0000 (0:00:00.172) 0:00:21.779 ********
skipping: [testbed-node-3]

TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
Saturday 28 March 2026 00:45:21 +0000 (0:00:00.171) 0:00:21.950 ********
skipping: [testbed-node-3]

TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
Saturday 28 March 2026 00:45:21 +0000 (0:00:00.172) 0:00:22.123 ********
skipping: [testbed-node-3]

TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
Saturday 28 March 2026 00:45:22 +0000 (0:00:00.156) 0:00:22.279 ********
skipping: [testbed-node-3]

TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
Saturday 28 March 2026 00:45:22 +0000 (0:00:00.146) 0:00:22.426 ********
skipping: [testbed-node-3]

TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
Saturday 28 March 2026 00:45:22 +0000 (0:00:00.161) 0:00:22.588 ********
skipping: [testbed-node-3]

TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
Saturday 28 March 2026 00:45:22 +0000 (0:00:00.146) 0:00:22.734 ********
skipping: [testbed-node-3]

TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
Saturday 28 March 2026 00:45:22 +0000 (0:00:00.147) 0:00:22.882 ********
skipping: [testbed-node-3]

TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
Saturday 28 March 2026 00:45:22 +0000 (0:00:00.156) 0:00:23.039 ********
skipping: [testbed-node-3]

TASK [Create DB LVs for ceph_db_devices] ***************************************
Saturday 28 March 2026 00:45:23 +0000 (0:00:00.146) 0:00:23.186 ********
skipping: [testbed-node-3] => (item={'data': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'data_vg': 'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'data_vg': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'})
skipping: [testbed-node-3]

TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
Saturday 28 March 2026 00:45:23 +0000 (0:00:00.471) 0:00:23.657 ********
skipping: [testbed-node-3] => (item={'data': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'data_vg': 'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'data_vg': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'})
skipping: [testbed-node-3]

TASK [Create WAL LVs for ceph_wal_devices] *************************************
Saturday 28 March 2026 00:45:23 +0000 (0:00:00.165) 0:00:23.822 ********
skipping: [testbed-node-3] => (item={'data': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'data_vg': 'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'data_vg': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'})
skipping: [testbed-node-3]

TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
Saturday 28 March 2026 00:45:23 +0000 (0:00:00.149) 0:00:23.972 ********
skipping: [testbed-node-3] => (item={'data': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'data_vg': 'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'data_vg': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'})
skipping: [testbed-node-3]

TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
Saturday 28 March 2026 00:45:23 +0000 (0:00:00.156) 0:00:24.128 ********
skipping: [testbed-node-3] => (item={'data': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'data_vg': 'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'data_vg': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'})
skipping: [testbed-node-3]

TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
Saturday 28 March 2026 00:45:24 +0000 (0:00:00.170) 0:00:24.299 ********
skipping: [testbed-node-3] => (item={'data': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'data_vg': 'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'data_vg': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'})
skipping: [testbed-node-3]

TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
Saturday 28 March 2026 00:45:24 +0000 (0:00:00.162) 0:00:24.462 ********
skipping: [testbed-node-3] => (item={'data': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'data_vg': 'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'data_vg': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'})
skipping: [testbed-node-3]

TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
Saturday 28 March 2026 00:45:24 +0000 (0:00:00.168) 0:00:24.630 ********
skipping: [testbed-node-3] => (item={'data': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'data_vg': 'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'data_vg': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'})
skipping: [testbed-node-3]

TASK [Get list of Ceph LVs with associated VGs] ********************************
Saturday 28 March 2026 00:45:24 +0000 (0:00:00.166) 0:00:24.796 ********
ok: [testbed-node-3]

TASK [Get list of Ceph PVs with associated VGs] ********************************
Saturday 28 March 2026 00:45:25 +0000 (0:00:00.550) 0:00:25.347 ********
ok: [testbed-node-3]

TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
Saturday 28 March 2026 00:45:25 +0000 (0:00:00.552) 0:00:25.900 ********
ok: [testbed-node-3]

TASK [Create list of VG/LV names] **********************************************
Saturday 28 March 2026 00:45:25 +0000 (0:00:00.159) 0:00:26.059 ********
ok: [testbed-node-3] => (item={'lv_name': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'vg_name': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'})
ok: [testbed-node-3] => (item={'lv_name': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'vg_name': 'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'})

TASK [Fail if block LV defined in lvm_volumes is missing] **********************
Saturday 28 March 2026 00:45:26 +0000 (0:00:00.182) 0:00:26.242 ********
skipping: [testbed-node-3] => (item={'data': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'data_vg': 'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'data_vg': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'})
skipping: [testbed-node-3]

TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
Saturday 28 March 2026 00:45:26 +0000 (0:00:00.382) 0:00:26.624 ********
skipping: [testbed-node-3] => (item={'data': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'data_vg': 'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'data_vg': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'})
skipping: [testbed-node-3]

TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
Saturday 28 March 2026 00:45:26 +0000 (0:00:00.169) 0:00:26.794 ********
skipping: [testbed-node-3] => (item={'data': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'data_vg': 'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'data_vg': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'})
skipping: [testbed-node-3]

TASK [Print LVM report data] ***************************************************
Saturday 28 March 2026 00:45:26 +0000 (0:00:00.163) 0:00:26.957 ********
ok: [testbed-node-3] => {
    "lvm_report": {
        "lv": [
            {
                "lv_name": "osd-block-1d415d19-3246-5675-b441-c36cba308c79",
                "vg_name": "ceph-1d415d19-3246-5675-b441-c36cba308c79"
            },
            {
                "lv_name": "osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b",
                "vg_name": "ceph-e282229f-a8c2-5daa-9c69-6eb93429113b"
            }
        ],
        "pv": [
            {
                "pv_name": "/dev/sdb",
                "vg_name": "ceph-e282229f-a8c2-5daa-9c69-6eb93429113b"
            },
            {
                "pv_name": "/dev/sdc",
                "vg_name": "ceph-1d415d19-3246-5675-b441-c36cba308c79"
            }
        ]
    }
}

PLAY [Ceph create LVM devices] *************************************************

TASK [Get extra vars for Ceph configuration] ***********************************
Saturday 28 March 2026 00:45:27 +0000 (0:00:00.301) 0:00:27.259 ********
ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]

TASK [Get initial list of available block devices] *****************************
Saturday 28 March 2026 00:45:27 +0000 (0:00:00.280) 0:00:27.539 ********
ok: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Saturday 28 March 2026 00:45:27 +0000 (0:00:00.253) 0:00:27.792 ********
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)

TASK [Add known links to the list of available block devices] ******************
Saturday 28 March 2026 00:45:28 +0000 (0:00:00.404) 0:00:28.197 ********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Saturday 28 March 2026 00:45:28 +0000 (0:00:00.188) 0:00:28.385 ********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Saturday 28 March 2026 00:45:28 +0000 (0:00:00.195) 0:00:28.580 ********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Saturday 28 March 2026 00:45:29 +0000 (0:00:00.642) 0:00:29.223 ********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Saturday 28 March 2026 00:45:29 +0000 (0:00:00.225) 0:00:29.449 ********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Saturday 28 March 2026 00:45:29 +0000 (0:00:00.225) 0:00:29.674 ********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Saturday 28 March 2026 00:45:29 +0000 (0:00:00.202) 0:00:29.877 ********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Saturday 28 March 2026 00:45:29 +0000 (0:00:00.200) 0:00:30.077 ********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Saturday 28 March 2026 00:45:30 +0000 (0:00:00.211) 0:00:30.289 ********
ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7)
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7)

TASK [Add known links to the list of available block devices] ******************
Saturday 28 March 2026 00:45:30 +0000 (0:00:00.435) 0:00:30.725 ********
ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4cb6368c-0066-4efd-8388-81f1557a02ca)
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4cb6368c-0066-4efd-8388-81f1557a02ca)

TASK [Add known links to the list of available block devices] ******************
Saturday 28 March 2026 00:45:30 +0000 (0:00:00.427) 0:00:31.152 ********
ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b9aebbdd-9418-41ff-9099-90b7dcb703f9)
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b9aebbdd-9418-41ff-9099-90b7dcb703f9)

TASK [Add known links to the list of available block devices] ******************
Saturday 28 March 2026 00:45:31 +0000 (0:00:00.425) 0:00:31.578 ********
ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f8ddcfbb-f935-4942-af25-8ac280f1cc67)
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f8ddcfbb-f935-4942-af25-8ac280f1cc67)

TASK [Add known links to the list of available block devices] ******************
Saturday 28 March 2026 00:45:32 +0000 (0:00:00.676) 0:00:32.255 ********
ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)

TASK [Add known partitions to the list of available block devices] *************
Saturday 28 March 2026 00:45:32 +0000 (0:00:00.580) 0:00:32.835 ********
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)

TASK [Add known partitions to the list of available block devices] *************
Saturday 28 March 2026 00:45:33 +0000 (0:00:00.904) 0:00:33.740 ********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Saturday 28 March 2026 00:45:33 +0000 (0:00:00.221) 0:00:33.961 ********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Saturday 28 March 2026 00:45:33 +0000 (0:00:00.199) 0:00:34.160 ********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Saturday 28 March 2026 00:45:34 +0000 (0:00:00.239) 0:00:34.400 ********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Saturday 28 March 2026 00:45:34 +0000 (0:00:00.203) 0:00:34.604 ********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Saturday 28 March 2026 00:45:34 +0000 (0:00:00.221) 0:00:34.825 ********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Saturday 28 March 2026 00:45:34 +0000 (0:00:00.222) 0:00:35.048 ********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Saturday 28 March 2026 00:45:35 +0000 (0:00:00.202) 0:00:35.251 ********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Saturday 28 March 2026 00:45:35 +0000 (0:00:00.213) 0:00:35.465 ********
ok: [testbed-node-4] => (item=sda1)
ok: [testbed-node-4] => (item=sda14)
ok: [testbed-node-4] => (item=sda15)
ok: [testbed-node-4] => (item=sda16)

TASK [Add known partitions to the list of available block devices] *************
Saturday 28 March 2026 00:45:36 +0000 (0:00:00.858) 0:00:36.323 ********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Saturday 28 March 2026 00:45:36 +0000 (0:00:00.196) 0:00:36.520 ********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Saturday 28 March 2026 00:45:37 +0000 (0:00:00.672) 0:00:37.192 ********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Saturday 28 March 2026 00:45:37 +0000 (0:00:00.246) 0:00:37.438 ********
skipping: [testbed-node-4]

TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
Saturday 28 March 2026 00:45:37 +0000 (0:00:00.206) 0:00:37.644 ********
skipping: [testbed-node-4]

TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
Saturday 28 March 2026 00:45:37 +0000 (0:00:00.138) 0:00:37.783 ********
ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de32c164-f4a0-5092-ad33-650515756f9d'}})
ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '65811f0f-7bf7-557a-9618-106707fc2899'}})

TASK [Create block VGs] ********************************************************
Saturday 28 March 2026 00:45:37 +0000 (0:00:00.197) 0:00:37.980 ********
changed: [testbed-node-4] => (item={'data': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'data_vg': 'ceph-de32c164-f4a0-5092-ad33-650515756f9d'})
changed: [testbed-node-4] => (item={'data': 'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'data_vg': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'})

TASK [Print 'Create block VGs'] ************************************************
Saturday 28 March 2026 00:45:39 +0000 (0:00:01.984) 0:00:39.964 ********
skipping: [testbed-node-4] => (item={'data': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'data_vg': 'ceph-de32c164-f4a0-5092-ad33-650515756f9d'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'data_vg': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'})
skipping: [testbed-node-4]

TASK [Create block LVs] ********************************************************
Saturday 28 March 2026 00:45:39 +0000 (0:00:00.147) 0:00:40.111 ********
changed: [testbed-node-4] => (item={'data': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'data_vg': 'ceph-de32c164-f4a0-5092-ad33-650515756f9d'})
changed: [testbed-node-4] => (item={'data': 'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'data_vg': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'})

TASK [Print 'Create block LVs'] ************************************************
Saturday 28 March 2026 00:45:41 +0000 (0:00:01.367) 0:00:41.479 ********
skipping: [testbed-node-4] => (item={'data': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'data_vg': 'ceph-de32c164-f4a0-5092-ad33-650515756f9d'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'data_vg': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'})
skipping: [testbed-node-4]

TASK [Create DB VGs] ***********************************************************
Saturday 28 March 2026 00:45:41 +0000 (0:00:00.165) 0:00:41.644 ********
skipping: [testbed-node-4]

TASK [Print 'Create DB VGs'] ***************************************************
Saturday 28 March 2026 00:45:41 +0000 (0:00:00.141) 0:00:41.786 ********
skipping: [testbed-node-4] => (item={'data': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'data_vg': 'ceph-de32c164-f4a0-5092-ad33-650515756f9d'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'data_vg': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'})
skipping: [testbed-node-4]

TASK [Create WAL VGs] **********************************************************
Saturday 28 March 2026 00:45:41 +0000 (0:00:00.175) 0:00:41.962 ********
skipping: [testbed-node-4]

TASK [Print 'Create WAL VGs'] **************************************************
Saturday 28 March 2026 00:45:41 +0000 (0:00:00.156) 0:00:42.118 ********
skipping: [testbed-node-4] => (item={'data': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'data_vg': 'ceph-de32c164-f4a0-5092-ad33-650515756f9d'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'data_vg': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'})
skipping: [testbed-node-4]

TASK [Create DB+WAL VGs] *******************************************************
Saturday 28 March 2026 00:45:42 +0000 (0:00:00.406) 0:00:42.524 ********
skipping: [testbed-node-4]

TASK [Print 'Create DB+WAL VGs'] ***********************************************
Saturday 28 March 2026 00:45:42 +0000 (0:00:00.138) 0:00:42.662 ********
skipping: [testbed-node-4] => (item={'data': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'data_vg': 'ceph-de32c164-f4a0-5092-ad33-650515756f9d'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'data_vg': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'})
skipping: [testbed-node-4]

TASK [Prepare variables for OSD count check] ***********************************
Saturday 28 March 2026 00:45:42 +0000 (0:00:00.191) 0:00:42.854 ********
ok: [testbed-node-4]

TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
Saturday 28 March 2026 00:45:42 +0000 (0:00:00.155) 0:00:43.009 ********
skipping: [testbed-node-4] => (item={'data': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'data_vg': 'ceph-de32c164-f4a0-5092-ad33-650515756f9d'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'data_vg': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'})
skipping: [testbed-node-4]

TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
Saturday 28 March 2026 00:45:42 +0000 (0:00:00.157) 0:00:43.167 ********
skipping: [testbed-node-4] => (item={'data': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'data_vg': 'ceph-de32c164-f4a0-5092-ad33-650515756f9d'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'data_vg': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'})
skipping: [testbed-node-4]

TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
Saturday 28 March 2026 00:45:43 +0000 (0:00:00.158) 0:00:43.326 ********
skipping: [testbed-node-4] => (item={'data': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'data_vg': 'ceph-de32c164-f4a0-5092-ad33-650515756f9d'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'data_vg': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'})
skipping: [testbed-node-4]

TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
Saturday 28 March 2026 00:45:43 +0000 (0:00:00.140) 0:00:43.466 ********
skipping: [testbed-node-4]

TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
Saturday 28 March 2026 00:45:43 +0000 (0:00:00.147) 0:00:43.613 ********
skipping: [testbed-node-4]

TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
Saturday 28 March 2026 00:45:43 +0000 (0:00:00.165) 0:00:43.779 ********
skipping: [testbed-node-4]

TASK [Print number of OSDs wanted per DB VG] ***********************************
Saturday 28 March 2026 00:45:43 +0000 (0:00:00.148) 0:00:43.927 ********
ok: [testbed-node-4] => {
    "_num_osds_wanted_per_db_vg": {}
}

TASK [Print number of OSDs wanted per WAL VG] **********************************
00:45:47.326179 | orchestrator | Saturday 28 March 2026 00:45:43 +0000 (0:00:00.164) 0:00:44.092 ******** 2026-03-28 00:45:47.326190 | orchestrator | ok: [testbed-node-4] => { 2026-03-28 00:45:47.326201 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-28 00:45:47.326212 | orchestrator | } 2026-03-28 00:45:47.326223 | orchestrator | 2026-03-28 00:45:47.326234 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-28 00:45:47.326245 | orchestrator | Saturday 28 March 2026 00:45:44 +0000 (0:00:00.176) 0:00:44.268 ******** 2026-03-28 00:45:47.326268 | orchestrator | ok: [testbed-node-4] => { 2026-03-28 00:45:47.326279 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-28 00:45:47.326290 | orchestrator | } 2026-03-28 00:45:47.326301 | orchestrator | 2026-03-28 00:45:47.326312 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-28 00:45:47.326323 | orchestrator | Saturday 28 March 2026 00:45:44 +0000 (0:00:00.396) 0:00:44.664 ******** 2026-03-28 00:45:47.326334 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:45:47.326345 | orchestrator | 2026-03-28 00:45:47.326356 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-28 00:45:47.326374 | orchestrator | Saturday 28 March 2026 00:45:45 +0000 (0:00:00.549) 0:00:45.214 ******** 2026-03-28 00:45:47.326385 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:45:47.326396 | orchestrator | 2026-03-28 00:45:47.326407 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-28 00:45:47.326418 | orchestrator | Saturday 28 March 2026 00:45:45 +0000 (0:00:00.555) 0:00:45.769 ******** 2026-03-28 00:45:47.326429 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:45:47.326470 | orchestrator | 2026-03-28 00:45:47.326488 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-03-28 00:45:47.326499 | orchestrator | Saturday 28 March 2026 00:45:46 +0000 (0:00:00.530) 0:00:46.300 ******** 2026-03-28 00:45:47.326510 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:45:47.326522 | orchestrator | 2026-03-28 00:45:47.326533 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-28 00:45:47.326544 | orchestrator | Saturday 28 March 2026 00:45:46 +0000 (0:00:00.152) 0:00:46.452 ******** 2026-03-28 00:45:47.326555 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:47.326566 | orchestrator | 2026-03-28 00:45:47.326577 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-28 00:45:47.326588 | orchestrator | Saturday 28 March 2026 00:45:46 +0000 (0:00:00.153) 0:00:46.606 ******** 2026-03-28 00:45:47.326599 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:47.326610 | orchestrator | 2026-03-28 00:45:47.326621 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-28 00:45:47.326632 | orchestrator | Saturday 28 March 2026 00:45:46 +0000 (0:00:00.128) 0:00:46.734 ******** 2026-03-28 00:45:47.326643 | orchestrator | ok: [testbed-node-4] => { 2026-03-28 00:45:47.326653 | orchestrator |  "vgs_report": { 2026-03-28 00:45:47.326665 | orchestrator |  "vg": [] 2026-03-28 00:45:47.326676 | orchestrator |  } 2026-03-28 00:45:47.326689 | orchestrator | } 2026-03-28 00:45:47.326708 | orchestrator | 2026-03-28 00:45:47.326726 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-28 00:45:47.326744 | orchestrator | Saturday 28 March 2026 00:45:46 +0000 (0:00:00.176) 0:00:46.911 ******** 2026-03-28 00:45:47.326763 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:47.326782 | orchestrator | 2026-03-28 00:45:47.326800 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-03-28 00:45:47.326814 | orchestrator | Saturday 28 March 2026 00:45:46 +0000 (0:00:00.148) 0:00:47.059 ******** 2026-03-28 00:45:47.326825 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:47.326836 | orchestrator | 2026-03-28 00:45:47.326847 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-28 00:45:47.326858 | orchestrator | Saturday 28 March 2026 00:45:47 +0000 (0:00:00.154) 0:00:47.214 ******** 2026-03-28 00:45:47.326868 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:47.326879 | orchestrator | 2026-03-28 00:45:47.326890 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-28 00:45:47.326901 | orchestrator | Saturday 28 March 2026 00:45:47 +0000 (0:00:00.153) 0:00:47.368 ******** 2026-03-28 00:45:47.326912 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:47.326929 | orchestrator | 2026-03-28 00:45:47.326960 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-28 00:45:52.490152 | orchestrator | Saturday 28 March 2026 00:45:47 +0000 (0:00:00.120) 0:00:47.488 ******** 2026-03-28 00:45:52.490338 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:52.490353 | orchestrator | 2026-03-28 00:45:52.490362 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-28 00:45:52.490370 | orchestrator | Saturday 28 March 2026 00:45:47 +0000 (0:00:00.392) 0:00:47.881 ******** 2026-03-28 00:45:52.490378 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:52.490386 | orchestrator | 2026-03-28 00:45:52.490395 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-28 00:45:52.490403 | orchestrator | Saturday 28 March 2026 00:45:47 +0000 (0:00:00.153) 0:00:48.034 ******** 2026-03-28 00:45:52.490410 | orchestrator | skipping: [testbed-node-4] 
2026-03-28 00:45:52.490418 | orchestrator | 2026-03-28 00:45:52.490426 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-28 00:45:52.490471 | orchestrator | Saturday 28 March 2026 00:45:48 +0000 (0:00:00.152) 0:00:48.187 ******** 2026-03-28 00:45:52.490479 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:52.490487 | orchestrator | 2026-03-28 00:45:52.490495 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-28 00:45:52.490504 | orchestrator | Saturday 28 March 2026 00:45:48 +0000 (0:00:00.154) 0:00:48.342 ******** 2026-03-28 00:45:52.490512 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:52.490520 | orchestrator | 2026-03-28 00:45:52.490528 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-28 00:45:52.490536 | orchestrator | Saturday 28 March 2026 00:45:48 +0000 (0:00:00.151) 0:00:48.493 ******** 2026-03-28 00:45:52.490544 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:52.490552 | orchestrator | 2026-03-28 00:45:52.490559 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-28 00:45:52.490567 | orchestrator | Saturday 28 March 2026 00:45:48 +0000 (0:00:00.140) 0:00:48.633 ******** 2026-03-28 00:45:52.490575 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:52.490583 | orchestrator | 2026-03-28 00:45:52.490591 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-28 00:45:52.490599 | orchestrator | Saturday 28 March 2026 00:45:48 +0000 (0:00:00.160) 0:00:48.794 ******** 2026-03-28 00:45:52.490607 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:52.490615 | orchestrator | 2026-03-28 00:45:52.490623 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-28 00:45:52.490631 | orchestrator | 
Saturday 28 March 2026 00:45:48 +0000 (0:00:00.152) 0:00:48.947 ******** 2026-03-28 00:45:52.490638 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:52.490646 | orchestrator | 2026-03-28 00:45:52.490656 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-28 00:45:52.490665 | orchestrator | Saturday 28 March 2026 00:45:48 +0000 (0:00:00.184) 0:00:49.132 ******** 2026-03-28 00:45:52.490674 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:52.490684 | orchestrator | 2026-03-28 00:45:52.490693 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-28 00:45:52.490702 | orchestrator | Saturday 28 March 2026 00:45:49 +0000 (0:00:00.163) 0:00:49.295 ******** 2026-03-28 00:45:52.490712 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'data_vg': 'ceph-de32c164-f4a0-5092-ad33-650515756f9d'})  2026-03-28 00:45:52.490724 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'data_vg': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'})  2026-03-28 00:45:52.490733 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:52.490742 | orchestrator | 2026-03-28 00:45:52.490751 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-28 00:45:52.490760 | orchestrator | Saturday 28 March 2026 00:45:49 +0000 (0:00:00.169) 0:00:49.464 ******** 2026-03-28 00:45:52.490769 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'data_vg': 'ceph-de32c164-f4a0-5092-ad33-650515756f9d'})  2026-03-28 00:45:52.490785 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'data_vg': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'})  2026-03-28 00:45:52.490794 | orchestrator | skipping: 
[testbed-node-4] 2026-03-28 00:45:52.490803 | orchestrator | 2026-03-28 00:45:52.490811 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-28 00:45:52.490821 | orchestrator | Saturday 28 March 2026 00:45:49 +0000 (0:00:00.183) 0:00:49.647 ******** 2026-03-28 00:45:52.490830 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'data_vg': 'ceph-de32c164-f4a0-5092-ad33-650515756f9d'})  2026-03-28 00:45:52.490838 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'data_vg': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'})  2026-03-28 00:45:52.490847 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:52.490856 | orchestrator | 2026-03-28 00:45:52.490869 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-28 00:45:52.490883 | orchestrator | Saturday 28 March 2026 00:45:49 +0000 (0:00:00.367) 0:00:50.015 ******** 2026-03-28 00:45:52.490894 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'data_vg': 'ceph-de32c164-f4a0-5092-ad33-650515756f9d'})  2026-03-28 00:45:52.490904 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'data_vg': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'})  2026-03-28 00:45:52.490913 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:52.490922 | orchestrator | 2026-03-28 00:45:52.490948 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-28 00:45:52.490958 | orchestrator | Saturday 28 March 2026 00:45:50 +0000 (0:00:00.163) 0:00:50.179 ******** 2026-03-28 00:45:52.490967 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'data_vg': 
'ceph-de32c164-f4a0-5092-ad33-650515756f9d'})  2026-03-28 00:45:52.490976 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'data_vg': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'})  2026-03-28 00:45:52.490985 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:52.490994 | orchestrator | 2026-03-28 00:45:52.491003 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-28 00:45:52.491013 | orchestrator | Saturday 28 March 2026 00:45:50 +0000 (0:00:00.165) 0:00:50.345 ******** 2026-03-28 00:45:52.491022 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'data_vg': 'ceph-de32c164-f4a0-5092-ad33-650515756f9d'})  2026-03-28 00:45:52.491030 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'data_vg': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'})  2026-03-28 00:45:52.491038 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:52.491046 | orchestrator | 2026-03-28 00:45:52.491054 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-28 00:45:52.491062 | orchestrator | Saturday 28 March 2026 00:45:50 +0000 (0:00:00.149) 0:00:50.494 ******** 2026-03-28 00:45:52.491107 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'data_vg': 'ceph-de32c164-f4a0-5092-ad33-650515756f9d'})  2026-03-28 00:45:52.491116 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'data_vg': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'})  2026-03-28 00:45:52.491125 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:52.491132 | orchestrator | 2026-03-28 00:45:52.491140 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-28 
00:45:52.491148 | orchestrator | Saturday 28 March 2026 00:45:50 +0000 (0:00:00.172) 0:00:50.667 ******** 2026-03-28 00:45:52.491156 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'data_vg': 'ceph-de32c164-f4a0-5092-ad33-650515756f9d'})  2026-03-28 00:45:52.491169 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'data_vg': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'})  2026-03-28 00:45:52.491181 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:52.491189 | orchestrator | 2026-03-28 00:45:52.491197 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-28 00:45:52.491205 | orchestrator | Saturday 28 March 2026 00:45:50 +0000 (0:00:00.162) 0:00:50.829 ******** 2026-03-28 00:45:52.491213 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:45:52.491221 | orchestrator | 2026-03-28 00:45:52.491228 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-28 00:45:52.491236 | orchestrator | Saturday 28 March 2026 00:45:51 +0000 (0:00:00.591) 0:00:51.421 ******** 2026-03-28 00:45:52.491244 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:45:52.491252 | orchestrator | 2026-03-28 00:45:52.491259 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-28 00:45:52.491267 | orchestrator | Saturday 28 March 2026 00:45:51 +0000 (0:00:00.564) 0:00:51.985 ******** 2026-03-28 00:45:52.491275 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:45:52.491283 | orchestrator | 2026-03-28 00:45:52.491290 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-28 00:45:52.491298 | orchestrator | Saturday 28 March 2026 00:45:51 +0000 (0:00:00.149) 0:00:52.135 ******** 2026-03-28 00:45:52.491306 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'vg_name': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'}) 2026-03-28 00:45:52.491316 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'vg_name': 'ceph-de32c164-f4a0-5092-ad33-650515756f9d'}) 2026-03-28 00:45:52.491324 | orchestrator | 2026-03-28 00:45:52.491331 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-28 00:45:52.491339 | orchestrator | Saturday 28 March 2026 00:45:52 +0000 (0:00:00.176) 0:00:52.312 ******** 2026-03-28 00:45:52.491347 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'data_vg': 'ceph-de32c164-f4a0-5092-ad33-650515756f9d'})  2026-03-28 00:45:52.491355 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'data_vg': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'})  2026-03-28 00:45:52.491362 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:52.491370 | orchestrator | 2026-03-28 00:45:52.491378 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-28 00:45:52.491386 | orchestrator | Saturday 28 March 2026 00:45:52 +0000 (0:00:00.184) 0:00:52.496 ******** 2026-03-28 00:45:52.491394 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'data_vg': 'ceph-de32c164-f4a0-5092-ad33-650515756f9d'})  2026-03-28 00:45:52.491407 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'data_vg': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'})  2026-03-28 00:45:58.825683 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:58.825853 | orchestrator | 2026-03-28 00:45:58.825874 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-28 00:45:58.825888 | 
orchestrator | Saturday 28 March 2026 00:45:52 +0000 (0:00:00.156) 0:00:52.652 ******** 2026-03-28 00:45:58.825899 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'data_vg': 'ceph-de32c164-f4a0-5092-ad33-650515756f9d'})  2026-03-28 00:45:58.825913 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'data_vg': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'})  2026-03-28 00:45:58.825924 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:58.825935 | orchestrator | 2026-03-28 00:45:58.825947 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-28 00:45:58.825990 | orchestrator | Saturday 28 March 2026 00:45:52 +0000 (0:00:00.151) 0:00:52.804 ******** 2026-03-28 00:45:58.826004 | orchestrator | ok: [testbed-node-4] => { 2026-03-28 00:45:58.826076 | orchestrator |  "lvm_report": { 2026-03-28 00:45:58.826090 | orchestrator |  "lv": [ 2026-03-28 00:45:58.826109 | orchestrator |  { 2026-03-28 00:45:58.826122 | orchestrator |  "lv_name": "osd-block-65811f0f-7bf7-557a-9618-106707fc2899", 2026-03-28 00:45:58.826135 | orchestrator |  "vg_name": "ceph-65811f0f-7bf7-557a-9618-106707fc2899" 2026-03-28 00:45:58.826198 | orchestrator |  }, 2026-03-28 00:45:58.826218 | orchestrator |  { 2026-03-28 00:45:58.826236 | orchestrator |  "lv_name": "osd-block-de32c164-f4a0-5092-ad33-650515756f9d", 2026-03-28 00:45:58.826256 | orchestrator |  "vg_name": "ceph-de32c164-f4a0-5092-ad33-650515756f9d" 2026-03-28 00:45:58.826273 | orchestrator |  } 2026-03-28 00:45:58.826293 | orchestrator |  ], 2026-03-28 00:45:58.826315 | orchestrator |  "pv": [ 2026-03-28 00:45:58.826334 | orchestrator |  { 2026-03-28 00:45:58.826351 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-28 00:45:58.826365 | orchestrator |  "vg_name": "ceph-de32c164-f4a0-5092-ad33-650515756f9d" 2026-03-28 00:45:58.826378 | orchestrator |  }, 2026-03-28 
00:45:58.826390 | orchestrator |  { 2026-03-28 00:45:58.826403 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-28 00:45:58.826415 | orchestrator |  "vg_name": "ceph-65811f0f-7bf7-557a-9618-106707fc2899" 2026-03-28 00:45:58.826521 | orchestrator |  } 2026-03-28 00:45:58.826534 | orchestrator |  ] 2026-03-28 00:45:58.826547 | orchestrator |  } 2026-03-28 00:45:58.826560 | orchestrator | } 2026-03-28 00:45:58.826572 | orchestrator | 2026-03-28 00:45:58.826585 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-28 00:45:58.826596 | orchestrator | 2026-03-28 00:45:58.826607 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-28 00:45:58.826618 | orchestrator | Saturday 28 March 2026 00:45:53 +0000 (0:00:00.514) 0:00:53.318 ******** 2026-03-28 00:45:58.826645 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-28 00:45:58.826657 | orchestrator | 2026-03-28 00:45:58.826667 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-28 00:45:58.826679 | orchestrator | Saturday 28 March 2026 00:45:53 +0000 (0:00:00.274) 0:00:53.593 ******** 2026-03-28 00:45:58.826690 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:45:58.826701 | orchestrator | 2026-03-28 00:45:58.826712 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:45:58.826723 | orchestrator | Saturday 28 March 2026 00:45:53 +0000 (0:00:00.278) 0:00:53.872 ******** 2026-03-28 00:45:58.826734 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-28 00:45:58.826744 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-28 00:45:58.826755 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-28 00:45:58.826766 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-28 00:45:58.826777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-28 00:45:58.826787 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-28 00:45:58.826798 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-28 00:45:58.826809 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-28 00:45:58.826819 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-28 00:45:58.826830 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-28 00:45:58.826858 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-28 00:45:58.826869 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-28 00:45:58.826880 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-28 00:45:58.826891 | orchestrator | 2026-03-28 00:45:58.826902 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:45:58.826917 | orchestrator | Saturday 28 March 2026 00:45:54 +0000 (0:00:00.462) 0:00:54.335 ******** 2026-03-28 00:45:58.826928 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:45:58.826939 | orchestrator | 2026-03-28 00:45:58.826950 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:45:58.826960 | orchestrator | Saturday 28 March 2026 00:45:54 +0000 (0:00:00.219) 0:00:54.555 ******** 2026-03-28 00:45:58.826971 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:45:58.826982 | orchestrator | 2026-03-28 
00:45:58.826993 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:45:58.827027 | orchestrator | Saturday 28 March 2026 00:45:54 +0000 (0:00:00.228) 0:00:54.783 ******** 2026-03-28 00:45:58.827048 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:45:58.827059 | orchestrator | 2026-03-28 00:45:58.827070 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:45:58.827081 | orchestrator | Saturday 28 March 2026 00:45:54 +0000 (0:00:00.186) 0:00:54.970 ******** 2026-03-28 00:45:58.827092 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:45:58.827102 | orchestrator | 2026-03-28 00:45:58.827113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:45:58.827124 | orchestrator | Saturday 28 March 2026 00:45:54 +0000 (0:00:00.197) 0:00:55.167 ******** 2026-03-28 00:45:58.827135 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:45:58.827145 | orchestrator | 2026-03-28 00:45:58.827156 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:45:58.827167 | orchestrator | Saturday 28 March 2026 00:45:55 +0000 (0:00:00.664) 0:00:55.832 ******** 2026-03-28 00:45:58.827178 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:45:58.827188 | orchestrator | 2026-03-28 00:45:58.827199 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:45:58.827210 | orchestrator | Saturday 28 March 2026 00:45:55 +0000 (0:00:00.210) 0:00:56.042 ******** 2026-03-28 00:45:58.827221 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:45:58.827239 | orchestrator | 2026-03-28 00:45:58.827257 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:45:58.827285 | orchestrator | Saturday 28 March 2026 00:45:56 +0000 (0:00:00.221) 
0:00:56.263 ******** 2026-03-28 00:45:58.827296 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:45:58.827306 | orchestrator | 2026-03-28 00:45:58.827317 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:45:58.827328 | orchestrator | Saturday 28 March 2026 00:45:56 +0000 (0:00:00.207) 0:00:56.470 ******** 2026-03-28 00:45:58.827339 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01) 2026-03-28 00:45:58.827352 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01) 2026-03-28 00:45:58.827363 | orchestrator | 2026-03-28 00:45:58.827373 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:45:58.827384 | orchestrator | Saturday 28 March 2026 00:45:56 +0000 (0:00:00.413) 0:00:56.884 ******** 2026-03-28 00:45:58.827395 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d59a946d-61ee-4c80-a151-abde4d1a3094) 2026-03-28 00:45:58.827406 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d59a946d-61ee-4c80-a151-abde4d1a3094) 2026-03-28 00:45:58.827416 | orchestrator | 2026-03-28 00:45:58.827472 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:45:58.827498 | orchestrator | Saturday 28 March 2026 00:45:57 +0000 (0:00:00.437) 0:00:57.321 ******** 2026-03-28 00:45:58.827510 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_adec6741-41cb-49e2-9389-e6d1302151a0) 2026-03-28 00:45:58.827520 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_adec6741-41cb-49e2-9389-e6d1302151a0) 2026-03-28 00:45:58.827531 | orchestrator | 2026-03-28 00:45:58.827542 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:45:58.827553 | orchestrator | Saturday 28 
March 2026 00:45:57 +0000 (0:00:00.447) 0:00:57.769 ******** 2026-03-28 00:45:58.827564 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_86e8f6ba-fcdd-41b8-9839-c0061159d97d) 2026-03-28 00:45:58.827575 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_86e8f6ba-fcdd-41b8-9839-c0061159d97d) 2026-03-28 00:45:58.827585 | orchestrator | 2026-03-28 00:45:58.827596 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:45:58.827607 | orchestrator | Saturday 28 March 2026 00:45:58 +0000 (0:00:00.427) 0:00:58.196 ******** 2026-03-28 00:45:58.827618 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-28 00:45:58.827629 | orchestrator | 2026-03-28 00:45:58.827640 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:45:58.827650 | orchestrator | Saturday 28 March 2026 00:45:58 +0000 (0:00:00.346) 0:00:58.542 ******** 2026-03-28 00:45:58.827661 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-28 00:45:58.827672 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-28 00:45:58.827682 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-28 00:45:58.827700 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-28 00:45:58.827713 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-28 00:45:58.827724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-28 00:45:58.827737 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-28 00:45:58.827755 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-28 00:45:58.827766 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-28 00:45:58.827776 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-28 00:45:58.827787 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-28 00:45:58.827806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-28 00:46:07.605143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-28 00:46:07.605225 | orchestrator | 2026-03-28 00:46:07.605240 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:46:07.605252 | orchestrator | Saturday 28 March 2026 00:45:58 +0000 (0:00:00.438) 0:00:58.981 ******** 2026-03-28 00:46:07.605263 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:07.605275 | orchestrator | 2026-03-28 00:46:07.605286 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:46:07.605298 | orchestrator | Saturday 28 March 2026 00:45:59 +0000 (0:00:00.211) 0:00:59.193 ******** 2026-03-28 00:46:07.605308 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:07.605320 | orchestrator | 2026-03-28 00:46:07.605331 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:46:07.605342 | orchestrator | Saturday 28 March 2026 00:45:59 +0000 (0:00:00.677) 0:00:59.871 ******** 2026-03-28 00:46:07.605353 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:07.605390 | orchestrator | 2026-03-28 00:46:07.605435 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:46:07.605457 | 
orchestrator | Saturday 28 March 2026 00:45:59 +0000 (0:00:00.214) 0:01:00.086 ******** 2026-03-28 00:46:07.605477 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:07.605495 | orchestrator | 2026-03-28 00:46:07.605515 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:46:07.605533 | orchestrator | Saturday 28 March 2026 00:46:00 +0000 (0:00:00.206) 0:01:00.293 ******** 2026-03-28 00:46:07.605553 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:07.605572 | orchestrator | 2026-03-28 00:46:07.605592 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:46:07.605610 | orchestrator | Saturday 28 March 2026 00:46:00 +0000 (0:00:00.210) 0:01:00.503 ******** 2026-03-28 00:46:07.605627 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:07.605647 | orchestrator | 2026-03-28 00:46:07.605666 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:46:07.605686 | orchestrator | Saturday 28 March 2026 00:46:00 +0000 (0:00:00.224) 0:01:00.728 ******** 2026-03-28 00:46:07.605706 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:07.605719 | orchestrator | 2026-03-28 00:46:07.605732 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:46:07.605745 | orchestrator | Saturday 28 March 2026 00:46:00 +0000 (0:00:00.238) 0:01:00.966 ******** 2026-03-28 00:46:07.605757 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:07.605770 | orchestrator | 2026-03-28 00:46:07.605782 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:46:07.605799 | orchestrator | Saturday 28 March 2026 00:46:00 +0000 (0:00:00.186) 0:01:01.153 ******** 2026-03-28 00:46:07.605819 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-28 00:46:07.605839 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-03-28 00:46:07.605860 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-28 00:46:07.605879 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-28 00:46:07.605900 | orchestrator | 2026-03-28 00:46:07.605921 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:46:07.605941 | orchestrator | Saturday 28 March 2026 00:46:01 +0000 (0:00:00.660) 0:01:01.814 ******** 2026-03-28 00:46:07.605961 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:07.605981 | orchestrator | 2026-03-28 00:46:07.606000 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:46:07.606082 | orchestrator | Saturday 28 March 2026 00:46:01 +0000 (0:00:00.196) 0:01:02.010 ******** 2026-03-28 00:46:07.606094 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:07.606105 | orchestrator | 2026-03-28 00:46:07.606116 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:46:07.606127 | orchestrator | Saturday 28 March 2026 00:46:02 +0000 (0:00:00.204) 0:01:02.215 ******** 2026-03-28 00:46:07.606138 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:07.606149 | orchestrator | 2026-03-28 00:46:07.606160 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:46:07.606170 | orchestrator | Saturday 28 March 2026 00:46:02 +0000 (0:00:00.187) 0:01:02.402 ******** 2026-03-28 00:46:07.606181 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:07.606192 | orchestrator | 2026-03-28 00:46:07.606202 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-28 00:46:07.606213 | orchestrator | Saturday 28 March 2026 00:46:02 +0000 (0:00:00.215) 0:01:02.617 ******** 2026-03-28 00:46:07.606224 | orchestrator | skipping: [testbed-node-5] 2026-03-28 
00:46:07.606234 | orchestrator | 2026-03-28 00:46:07.606246 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-28 00:46:07.606256 | orchestrator | Saturday 28 March 2026 00:46:02 +0000 (0:00:00.336) 0:01:02.954 ******** 2026-03-28 00:46:07.606267 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8b5a6aab-ec84-598a-adc7-d040a5844549'}}) 2026-03-28 00:46:07.606294 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'}}) 2026-03-28 00:46:07.606314 | orchestrator | 2026-03-28 00:46:07.606336 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-28 00:46:07.606358 | orchestrator | Saturday 28 March 2026 00:46:02 +0000 (0:00:00.195) 0:01:03.149 ******** 2026-03-28 00:46:07.606378 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'data_vg': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'}) 2026-03-28 00:46:07.606404 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'data_vg': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'}) 2026-03-28 00:46:07.606449 | orchestrator | 2026-03-28 00:46:07.606460 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-28 00:46:07.606490 | orchestrator | Saturday 28 March 2026 00:46:04 +0000 (0:00:01.831) 0:01:04.981 ******** 2026-03-28 00:46:07.606502 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'data_vg': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'})  2026-03-28 00:46:07.606514 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'data_vg': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'})  2026-03-28 00:46:07.606525 | orchestrator | skipping: 
[testbed-node-5] 2026-03-28 00:46:07.606536 | orchestrator | 2026-03-28 00:46:07.606546 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-28 00:46:07.606557 | orchestrator | Saturday 28 March 2026 00:46:04 +0000 (0:00:00.154) 0:01:05.136 ******** 2026-03-28 00:46:07.606569 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'data_vg': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'}) 2026-03-28 00:46:07.606580 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'data_vg': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'}) 2026-03-28 00:46:07.606591 | orchestrator | 2026-03-28 00:46:07.606602 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-28 00:46:07.606613 | orchestrator | Saturday 28 March 2026 00:46:06 +0000 (0:00:01.316) 0:01:06.452 ******** 2026-03-28 00:46:07.606624 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'data_vg': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'})  2026-03-28 00:46:07.606635 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'data_vg': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'})  2026-03-28 00:46:07.606646 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:07.606657 | orchestrator | 2026-03-28 00:46:07.606668 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-28 00:46:07.606679 | orchestrator | Saturday 28 March 2026 00:46:06 +0000 (0:00:00.129) 0:01:06.581 ******** 2026-03-28 00:46:07.606690 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:07.606700 | orchestrator | 2026-03-28 00:46:07.606711 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-28 00:46:07.606722 | 
orchestrator | Saturday 28 March 2026 00:46:06 +0000 (0:00:00.113) 0:01:06.695 ******** 2026-03-28 00:46:07.606733 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'data_vg': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'})  2026-03-28 00:46:07.606749 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'data_vg': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'})  2026-03-28 00:46:07.606760 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:07.606771 | orchestrator | 2026-03-28 00:46:07.606782 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-28 00:46:07.606793 | orchestrator | Saturday 28 March 2026 00:46:06 +0000 (0:00:00.148) 0:01:06.844 ******** 2026-03-28 00:46:07.606813 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:07.606824 | orchestrator | 2026-03-28 00:46:07.606835 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-28 00:46:07.606846 | orchestrator | Saturday 28 March 2026 00:46:06 +0000 (0:00:00.118) 0:01:06.962 ******** 2026-03-28 00:46:07.606857 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'data_vg': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'})  2026-03-28 00:46:07.606868 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'data_vg': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'})  2026-03-28 00:46:07.606879 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:07.606890 | orchestrator | 2026-03-28 00:46:07.606901 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-28 00:46:07.606912 | orchestrator | Saturday 28 March 2026 00:46:06 +0000 (0:00:00.135) 0:01:07.098 ******** 2026-03-28 00:46:07.606922 | orchestrator | 
skipping: [testbed-node-5] 2026-03-28 00:46:07.606934 | orchestrator | 2026-03-28 00:46:07.606944 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-28 00:46:07.606955 | orchestrator | Saturday 28 March 2026 00:46:07 +0000 (0:00:00.127) 0:01:07.226 ******** 2026-03-28 00:46:07.606966 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'data_vg': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'})  2026-03-28 00:46:07.606977 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'data_vg': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'})  2026-03-28 00:46:07.606988 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:07.606999 | orchestrator | 2026-03-28 00:46:07.607010 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-28 00:46:07.607021 | orchestrator | Saturday 28 March 2026 00:46:07 +0000 (0:00:00.139) 0:01:07.365 ******** 2026-03-28 00:46:07.607032 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:46:07.607048 | orchestrator | 2026-03-28 00:46:07.607069 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-28 00:46:07.607090 | orchestrator | Saturday 28 March 2026 00:46:07 +0000 (0:00:00.266) 0:01:07.632 ******** 2026-03-28 00:46:07.607120 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'data_vg': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'})  2026-03-28 00:46:13.747148 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'data_vg': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'})  2026-03-28 00:46:13.747255 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.747272 | orchestrator | 2026-03-28 00:46:13.747285 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-03-28 00:46:13.747299 | orchestrator | Saturday 28 March 2026 00:46:07 +0000 (0:00:00.143) 0:01:07.775 ******** 2026-03-28 00:46:13.747312 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'data_vg': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'})  2026-03-28 00:46:13.747332 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'data_vg': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'})  2026-03-28 00:46:13.747352 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.747371 | orchestrator | 2026-03-28 00:46:13.747389 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-28 00:46:13.747435 | orchestrator | Saturday 28 March 2026 00:46:07 +0000 (0:00:00.127) 0:01:07.902 ******** 2026-03-28 00:46:13.747455 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'data_vg': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'})  2026-03-28 00:46:13.747474 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'data_vg': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'})  2026-03-28 00:46:13.747522 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.747542 | orchestrator | 2026-03-28 00:46:13.747560 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-28 00:46:13.747579 | orchestrator | Saturday 28 March 2026 00:46:07 +0000 (0:00:00.160) 0:01:08.063 ******** 2026-03-28 00:46:13.747597 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.747615 | orchestrator | 2026-03-28 00:46:13.747634 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-28 00:46:13.747653 | orchestrator | Saturday 28 March 2026 00:46:08 +0000 
(0:00:00.134) 0:01:08.197 ******** 2026-03-28 00:46:13.747671 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.747690 | orchestrator | 2026-03-28 00:46:13.747709 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-28 00:46:13.747727 | orchestrator | Saturday 28 March 2026 00:46:08 +0000 (0:00:00.151) 0:01:08.349 ******** 2026-03-28 00:46:13.747745 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.747764 | orchestrator | 2026-03-28 00:46:13.747801 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-28 00:46:13.747821 | orchestrator | Saturday 28 March 2026 00:46:08 +0000 (0:00:00.133) 0:01:08.482 ******** 2026-03-28 00:46:13.747839 | orchestrator | ok: [testbed-node-5] => { 2026-03-28 00:46:13.747859 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-28 00:46:13.747879 | orchestrator | } 2026-03-28 00:46:13.747897 | orchestrator | 2026-03-28 00:46:13.747915 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-28 00:46:13.747934 | orchestrator | Saturday 28 March 2026 00:46:08 +0000 (0:00:00.134) 0:01:08.617 ******** 2026-03-28 00:46:13.747953 | orchestrator | ok: [testbed-node-5] => { 2026-03-28 00:46:13.747972 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-28 00:46:13.747990 | orchestrator | } 2026-03-28 00:46:13.748010 | orchestrator | 2026-03-28 00:46:13.748027 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-28 00:46:13.748045 | orchestrator | Saturday 28 March 2026 00:46:08 +0000 (0:00:00.141) 0:01:08.759 ******** 2026-03-28 00:46:13.748064 | orchestrator | ok: [testbed-node-5] => { 2026-03-28 00:46:13.748082 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-28 00:46:13.748099 | orchestrator | } 2026-03-28 00:46:13.748119 | orchestrator | 2026-03-28 00:46:13.748137 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-28 00:46:13.748155 | orchestrator | Saturday 28 March 2026 00:46:08 +0000 (0:00:00.161) 0:01:08.920 ******** 2026-03-28 00:46:13.748174 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:46:13.748192 | orchestrator | 2026-03-28 00:46:13.748211 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-28 00:46:13.748230 | orchestrator | Saturday 28 March 2026 00:46:09 +0000 (0:00:00.516) 0:01:09.437 ******** 2026-03-28 00:46:13.748243 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:46:13.748254 | orchestrator | 2026-03-28 00:46:13.748265 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-28 00:46:13.748276 | orchestrator | Saturday 28 March 2026 00:46:09 +0000 (0:00:00.517) 0:01:09.954 ******** 2026-03-28 00:46:13.748287 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:46:13.748297 | orchestrator | 2026-03-28 00:46:13.748308 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-28 00:46:13.748319 | orchestrator | Saturday 28 March 2026 00:46:10 +0000 (0:00:00.745) 0:01:10.700 ******** 2026-03-28 00:46:13.748330 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:46:13.748340 | orchestrator | 2026-03-28 00:46:13.748351 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-28 00:46:13.748362 | orchestrator | Saturday 28 March 2026 00:46:10 +0000 (0:00:00.168) 0:01:10.869 ******** 2026-03-28 00:46:13.748373 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.748383 | orchestrator | 2026-03-28 00:46:13.748394 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-28 00:46:13.748457 | orchestrator | Saturday 28 March 2026 00:46:10 +0000 (0:00:00.120) 0:01:10.989 ******** 2026-03-28 00:46:13.748470 | 
orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.748481 | orchestrator | 2026-03-28 00:46:13.748492 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-28 00:46:13.748503 | orchestrator | Saturday 28 March 2026 00:46:10 +0000 (0:00:00.125) 0:01:11.115 ******** 2026-03-28 00:46:13.748514 | orchestrator | ok: [testbed-node-5] => { 2026-03-28 00:46:13.748525 | orchestrator |  "vgs_report": { 2026-03-28 00:46:13.748536 | orchestrator |  "vg": [] 2026-03-28 00:46:13.748568 | orchestrator |  } 2026-03-28 00:46:13.748580 | orchestrator | } 2026-03-28 00:46:13.748591 | orchestrator | 2026-03-28 00:46:13.748602 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-28 00:46:13.748613 | orchestrator | Saturday 28 March 2026 00:46:11 +0000 (0:00:00.136) 0:01:11.251 ******** 2026-03-28 00:46:13.748623 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.748634 | orchestrator | 2026-03-28 00:46:13.748645 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-28 00:46:13.748655 | orchestrator | Saturday 28 March 2026 00:46:11 +0000 (0:00:00.140) 0:01:11.392 ******** 2026-03-28 00:46:13.748666 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.748677 | orchestrator | 2026-03-28 00:46:13.748687 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-28 00:46:13.748698 | orchestrator | Saturday 28 March 2026 00:46:11 +0000 (0:00:00.143) 0:01:11.536 ******** 2026-03-28 00:46:13.748709 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.748719 | orchestrator | 2026-03-28 00:46:13.748730 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-28 00:46:13.748741 | orchestrator | Saturday 28 March 2026 00:46:11 +0000 (0:00:00.146) 0:01:11.683 ******** 2026-03-28 00:46:13.748752 | 
orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.748762 | orchestrator | 2026-03-28 00:46:13.748773 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-28 00:46:13.748784 | orchestrator | Saturday 28 March 2026 00:46:11 +0000 (0:00:00.144) 0:01:11.827 ******** 2026-03-28 00:46:13.748794 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.748805 | orchestrator | 2026-03-28 00:46:13.748816 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-28 00:46:13.748827 | orchestrator | Saturday 28 March 2026 00:46:11 +0000 (0:00:00.129) 0:01:11.957 ******** 2026-03-28 00:46:13.748837 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.748848 | orchestrator | 2026-03-28 00:46:13.748858 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-28 00:46:13.748869 | orchestrator | Saturday 28 March 2026 00:46:11 +0000 (0:00:00.144) 0:01:12.101 ******** 2026-03-28 00:46:13.748880 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.748890 | orchestrator | 2026-03-28 00:46:13.748901 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-28 00:46:13.748912 | orchestrator | Saturday 28 March 2026 00:46:12 +0000 (0:00:00.129) 0:01:12.231 ******** 2026-03-28 00:46:13.748922 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.748933 | orchestrator | 2026-03-28 00:46:13.748944 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-28 00:46:13.748954 | orchestrator | Saturday 28 March 2026 00:46:12 +0000 (0:00:00.372) 0:01:12.604 ******** 2026-03-28 00:46:13.748965 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.748975 | orchestrator | 2026-03-28 00:46:13.749001 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-03-28 00:46:13.749018 | orchestrator | Saturday 28 March 2026 00:46:12 +0000 (0:00:00.141) 0:01:12.745 ******** 2026-03-28 00:46:13.749034 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.749050 | orchestrator | 2026-03-28 00:46:13.749066 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-28 00:46:13.749081 | orchestrator | Saturday 28 March 2026 00:46:12 +0000 (0:00:00.144) 0:01:12.890 ******** 2026-03-28 00:46:13.749108 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.749124 | orchestrator | 2026-03-28 00:46:13.749142 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-28 00:46:13.749159 | orchestrator | Saturday 28 March 2026 00:46:12 +0000 (0:00:00.145) 0:01:13.036 ******** 2026-03-28 00:46:13.749176 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.749194 | orchestrator | 2026-03-28 00:46:13.749241 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-28 00:46:13.749275 | orchestrator | Saturday 28 March 2026 00:46:13 +0000 (0:00:00.148) 0:01:13.185 ******** 2026-03-28 00:46:13.749297 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.749315 | orchestrator | 2026-03-28 00:46:13.749332 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-28 00:46:13.749343 | orchestrator | Saturday 28 March 2026 00:46:13 +0000 (0:00:00.124) 0:01:13.309 ******** 2026-03-28 00:46:13.749354 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.749364 | orchestrator | 2026-03-28 00:46:13.749375 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-28 00:46:13.749386 | orchestrator | Saturday 28 March 2026 00:46:13 +0000 (0:00:00.132) 0:01:13.441 ******** 2026-03-28 00:46:13.749397 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'data_vg': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'})  2026-03-28 00:46:13.749436 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'data_vg': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'})  2026-03-28 00:46:13.749447 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.749458 | orchestrator | 2026-03-28 00:46:13.749469 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-28 00:46:13.749480 | orchestrator | Saturday 28 March 2026 00:46:13 +0000 (0:00:00.158) 0:01:13.600 ******** 2026-03-28 00:46:13.749491 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'data_vg': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'})  2026-03-28 00:46:13.749502 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'data_vg': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'})  2026-03-28 00:46:13.749512 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:13.749523 | orchestrator | 2026-03-28 00:46:13.749534 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-28 00:46:13.749545 | orchestrator | Saturday 28 March 2026 00:46:13 +0000 (0:00:00.154) 0:01:13.754 ******** 2026-03-28 00:46:13.749570 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'data_vg': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'})  2026-03-28 00:46:16.848142 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'data_vg': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'})  2026-03-28 00:46:16.848244 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:16.848261 | orchestrator | 2026-03-28 00:46:16.848273 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-03-28 00:46:16.848287 | orchestrator | Saturday 28 March 2026 00:46:13 +0000 (0:00:00.159) 0:01:13.914 ******** 2026-03-28 00:46:16.848299 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'data_vg': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'})  2026-03-28 00:46:16.848311 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'data_vg': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'})  2026-03-28 00:46:16.848322 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:16.848333 | orchestrator | 2026-03-28 00:46:16.848345 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-28 00:46:16.848356 | orchestrator | Saturday 28 March 2026 00:46:13 +0000 (0:00:00.155) 0:01:14.069 ******** 2026-03-28 00:46:16.848395 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'data_vg': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'})  2026-03-28 00:46:16.848470 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'data_vg': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'})  2026-03-28 00:46:16.848481 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:16.848493 | orchestrator | 2026-03-28 00:46:16.848504 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-28 00:46:16.848515 | orchestrator | Saturday 28 March 2026 00:46:14 +0000 (0:00:00.183) 0:01:14.253 ******** 2026-03-28 00:46:16.848526 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'data_vg': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'})  2026-03-28 00:46:16.848537 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'data_vg': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'})  2026-03-28 00:46:16.848549 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:16.848559 | orchestrator | 2026-03-28 00:46:16.848570 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-28 00:46:16.848581 | orchestrator | Saturday 28 March 2026 00:46:14 +0000 (0:00:00.385) 0:01:14.639 ******** 2026-03-28 00:46:16.848592 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'data_vg': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'})  2026-03-28 00:46:16.848604 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'data_vg': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'})  2026-03-28 00:46:16.848615 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:16.848626 | orchestrator | 2026-03-28 00:46:16.848637 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-28 00:46:16.848648 | orchestrator | Saturday 28 March 2026 00:46:14 +0000 (0:00:00.160) 0:01:14.799 ******** 2026-03-28 00:46:16.848659 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'data_vg': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'})  2026-03-28 00:46:16.848670 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'data_vg': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'})  2026-03-28 00:46:16.848683 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:16.848695 | orchestrator | 2026-03-28 00:46:16.848708 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-28 00:46:16.848720 | orchestrator | Saturday 28 March 2026 00:46:14 +0000 (0:00:00.148) 0:01:14.947 ******** 2026-03-28 00:46:16.848733 | 
orchestrator | ok: [testbed-node-5] 2026-03-28 00:46:16.848747 | orchestrator | 2026-03-28 00:46:16.848759 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-28 00:46:16.848771 | orchestrator | Saturday 28 March 2026 00:46:15 +0000 (0:00:00.584) 0:01:15.532 ******** 2026-03-28 00:46:16.848784 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:46:16.848797 | orchestrator | 2026-03-28 00:46:16.848809 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-28 00:46:16.848822 | orchestrator | Saturday 28 March 2026 00:46:15 +0000 (0:00:00.518) 0:01:16.051 ******** 2026-03-28 00:46:16.848834 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:46:16.848847 | orchestrator | 2026-03-28 00:46:16.848859 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-28 00:46:16.848871 | orchestrator | Saturday 28 March 2026 00:46:16 +0000 (0:00:00.143) 0:01:16.195 ******** 2026-03-28 00:46:16.848884 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'vg_name': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'}) 2026-03-28 00:46:16.848899 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'vg_name': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'}) 2026-03-28 00:46:16.848922 | orchestrator | 2026-03-28 00:46:16.848934 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-28 00:46:16.848947 | orchestrator | Saturday 28 March 2026 00:46:16 +0000 (0:00:00.181) 0:01:16.376 ******** 2026-03-28 00:46:16.848994 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'data_vg': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'})  2026-03-28 00:46:16.849007 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'data_vg': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'})  2026-03-28 00:46:16.849019 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:16.849029 | orchestrator | 2026-03-28 00:46:16.849041 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-28 00:46:16.849052 | orchestrator | Saturday 28 March 2026 00:46:16 +0000 (0:00:00.157) 0:01:16.533 ******** 2026-03-28 00:46:16.849064 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'data_vg': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'})  2026-03-28 00:46:16.849075 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'data_vg': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'})  2026-03-28 00:46:16.849086 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:16.849097 | orchestrator | 2026-03-28 00:46:16.849108 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-28 00:46:16.849119 | orchestrator | Saturday 28 March 2026 00:46:16 +0000 (0:00:00.150) 0:01:16.684 ******** 2026-03-28 00:46:16.849130 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'data_vg': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'})  2026-03-28 00:46:16.849141 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'data_vg': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'})  2026-03-28 00:46:16.849152 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:16.849163 | orchestrator | 2026-03-28 00:46:16.849174 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-28 00:46:16.849185 | orchestrator | Saturday 28 March 2026 00:46:16 +0000 (0:00:00.154) 0:01:16.838 ******** 2026-03-28 00:46:16.849196 | 
orchestrator | ok: [testbed-node-5] => { 2026-03-28 00:46:16.849207 | orchestrator |  "lvm_report": { 2026-03-28 00:46:16.849218 | orchestrator |  "lv": [ 2026-03-28 00:46:16.849229 | orchestrator |  { 2026-03-28 00:46:16.849240 | orchestrator |  "lv_name": "osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe", 2026-03-28 00:46:16.849256 | orchestrator |  "vg_name": "ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe" 2026-03-28 00:46:16.849267 | orchestrator |  }, 2026-03-28 00:46:16.849278 | orchestrator |  { 2026-03-28 00:46:16.849289 | orchestrator |  "lv_name": "osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549", 2026-03-28 00:46:16.849300 | orchestrator |  "vg_name": "ceph-8b5a6aab-ec84-598a-adc7-d040a5844549" 2026-03-28 00:46:16.849311 | orchestrator |  } 2026-03-28 00:46:16.849322 | orchestrator |  ], 2026-03-28 00:46:16.849333 | orchestrator |  "pv": [ 2026-03-28 00:46:16.849344 | orchestrator |  { 2026-03-28 00:46:16.849355 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-28 00:46:16.849365 | orchestrator |  "vg_name": "ceph-8b5a6aab-ec84-598a-adc7-d040a5844549" 2026-03-28 00:46:16.849376 | orchestrator |  }, 2026-03-28 00:46:16.849387 | orchestrator |  { 2026-03-28 00:46:16.849418 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-28 00:46:16.849429 | orchestrator |  "vg_name": "ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe" 2026-03-28 00:46:16.849440 | orchestrator |  } 2026-03-28 00:46:16.849451 | orchestrator |  ] 2026-03-28 00:46:16.849462 | orchestrator |  } 2026-03-28 00:46:16.849473 | orchestrator | } 2026-03-28 00:46:16.849492 | orchestrator | 2026-03-28 00:46:16.849503 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:46:16.849514 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-28 00:46:16.849525 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-28 00:46:16.849536 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-28 00:46:16.849547 | orchestrator | 2026-03-28 00:46:16.849558 | orchestrator | 2026-03-28 00:46:16.849569 | orchestrator | 2026-03-28 00:46:16.849580 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:46:16.849591 | orchestrator | Saturday 28 March 2026 00:46:16 +0000 (0:00:00.154) 0:01:16.993 ******** 2026-03-28 00:46:16.849602 | orchestrator | =============================================================================== 2026-03-28 00:46:16.849613 | orchestrator | Create block VGs -------------------------------------------------------- 6.07s 2026-03-28 00:46:16.849623 | orchestrator | Create block LVs -------------------------------------------------------- 4.20s 2026-03-28 00:46:16.849634 | orchestrator | Add known partitions to the list of available block devices ------------- 1.99s 2026-03-28 00:46:16.849645 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.87s 2026-03-28 00:46:16.849656 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.83s 2026-03-28 00:46:16.849666 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.73s 2026-03-28 00:46:16.849677 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.64s 2026-03-28 00:46:16.849688 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.61s 2026-03-28 00:46:16.849706 | orchestrator | Add known links to the list of available block devices ------------------ 1.47s 2026-03-28 00:46:17.274465 | orchestrator | Add known partitions to the list of available block devices ------------- 0.98s 2026-03-28 00:46:17.274572 | orchestrator | Print LVM report data --------------------------------------------------- 0.97s 2026-03-28 00:46:17.274587 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.89s 2026-03-28 00:46:17.274599 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s 2026-03-28 00:46:17.274610 | orchestrator | Add known links to the list of available block devices ------------------ 0.83s 2026-03-28 00:46:17.274621 | orchestrator | Get initial list of available block devices ----------------------------- 0.82s 2026-03-28 00:46:17.274632 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.81s 2026-03-28 00:46:17.274643 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.80s 2026-03-28 00:46:17.274653 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2026-03-28 00:46:17.274664 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.72s 2026-03-28 00:46:17.274675 | orchestrator | Print number of OSDs wanted per DB+WAL VG ------------------------------- 0.72s 2026-03-28 00:46:29.772377 | orchestrator | 2026-03-28 00:46:29 | INFO  | Task 7bb1214f-96ec-4ac5-9edb-a0d0c7f75e87 (facts) was prepared for execution. 2026-03-28 00:46:29.772511 | orchestrator | 2026-03-28 00:46:29 | INFO  | It takes a moment until task 7bb1214f-96ec-4ac5-9edb-a0d0c7f75e87 (facts) has been started and output is visible here. 
2026-03-28 00:46:42.213615 | orchestrator | 2026-03-28 00:46:42.213728 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-28 00:46:42.213744 | orchestrator | 2026-03-28 00:46:42.213756 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-28 00:46:42.213768 | orchestrator | Saturday 28 March 2026 00:46:34 +0000 (0:00:00.257) 0:00:00.257 ******** 2026-03-28 00:46:42.213807 | orchestrator | ok: [testbed-manager] 2026-03-28 00:46:42.213820 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:46:42.213831 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:46:42.213842 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:46:42.213853 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:46:42.213864 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:46:42.213875 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:46:42.213885 | orchestrator | 2026-03-28 00:46:42.213902 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-28 00:46:42.213938 | orchestrator | Saturday 28 March 2026 00:46:35 +0000 (0:00:01.102) 0:00:01.359 ******** 2026-03-28 00:46:42.213959 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:46:42.213977 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:46:42.213995 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:46:42.214010 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:46:42.214101 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:46:42.214119 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:46:42.214130 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:42.214141 | orchestrator | 2026-03-28 00:46:42.214152 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-28 00:46:42.214165 | orchestrator | 2026-03-28 00:46:42.214177 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-28 00:46:42.214191 | orchestrator | Saturday 28 March 2026 00:46:36 +0000 (0:00:01.254) 0:00:02.614 ******** 2026-03-28 00:46:42.214203 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:46:42.214215 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:46:42.214227 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:46:42.214239 | orchestrator | ok: [testbed-manager] 2026-03-28 00:46:42.214251 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:46:42.214264 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:46:42.214276 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:46:42.214288 | orchestrator | 2026-03-28 00:46:42.214300 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-28 00:46:42.214312 | orchestrator | 2026-03-28 00:46:42.214325 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-28 00:46:42.214338 | orchestrator | Saturday 28 March 2026 00:46:41 +0000 (0:00:04.882) 0:00:07.496 ******** 2026-03-28 00:46:42.214350 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:46:42.214362 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:46:42.214395 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:46:42.214408 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:46:42.214420 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:46:42.214432 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:46:42.214445 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:46:42.214457 | orchestrator | 2026-03-28 00:46:42.214470 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:46:42.214482 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:46:42.214496 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-28 00:46:42.214508 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:46:42.214519 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:46:42.214531 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:46:42.214541 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:46:42.214552 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:46:42.214575 | orchestrator | 2026-03-28 00:46:42.214586 | orchestrator | 2026-03-28 00:46:42.214597 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:46:42.214608 | orchestrator | Saturday 28 March 2026 00:46:41 +0000 (0:00:00.554) 0:00:08.051 ******** 2026-03-28 00:46:42.214619 | orchestrator | =============================================================================== 2026-03-28 00:46:42.214630 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.88s 2026-03-28 00:46:42.214640 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.25s 2026-03-28 00:46:42.214651 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.10s 2026-03-28 00:46:42.214662 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-03-28 00:46:54.659914 | orchestrator | 2026-03-28 00:46:54 | INFO  | Task 9e3b4f94-b65b-4f7c-bf97-80a9926bfbdb (frr) was prepared for execution. 2026-03-28 00:46:54.660020 | orchestrator | 2026-03-28 00:46:54 | INFO  | It takes a moment until task 9e3b4f94-b65b-4f7c-bf97-80a9926bfbdb (frr) has been started and output is visible here. 
2026-03-28 00:47:21.760740 | orchestrator | 2026-03-28 00:47:21.760854 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-28 00:47:21.760871 | orchestrator | 2026-03-28 00:47:21.760883 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-28 00:47:21.760894 | orchestrator | Saturday 28 March 2026 00:46:59 +0000 (0:00:00.246) 0:00:00.246 ******** 2026-03-28 00:47:21.760906 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-28 00:47:21.760924 | orchestrator | 2026-03-28 00:47:21.760945 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-28 00:47:21.760964 | orchestrator | Saturday 28 March 2026 00:46:59 +0000 (0:00:00.236) 0:00:00.483 ******** 2026-03-28 00:47:21.760983 | orchestrator | changed: [testbed-manager] 2026-03-28 00:47:21.761003 | orchestrator | 2026-03-28 00:47:21.761020 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-28 00:47:21.761038 | orchestrator | Saturday 28 March 2026 00:47:00 +0000 (0:00:01.288) 0:00:01.771 ******** 2026-03-28 00:47:21.761079 | orchestrator | changed: [testbed-manager] 2026-03-28 00:47:21.761100 | orchestrator | 2026-03-28 00:47:21.761118 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-28 00:47:21.761138 | orchestrator | Saturday 28 March 2026 00:47:11 +0000 (0:00:10.593) 0:00:12.365 ******** 2026-03-28 00:47:21.761177 | orchestrator | ok: [testbed-manager] 2026-03-28 00:47:21.761196 | orchestrator | 2026-03-28 00:47:21.761216 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-28 00:47:21.761234 | orchestrator | Saturday 28 March 2026 00:47:12 +0000 (0:00:01.020) 0:00:13.386 ******** 2026-03-28 
00:47:21.761252 | orchestrator | changed: [testbed-manager] 2026-03-28 00:47:21.761272 | orchestrator | 2026-03-28 00:47:21.761292 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-28 00:47:21.761311 | orchestrator | Saturday 28 March 2026 00:47:13 +0000 (0:00:00.963) 0:00:14.349 ******** 2026-03-28 00:47:21.761348 | orchestrator | ok: [testbed-manager] 2026-03-28 00:47:21.761361 | orchestrator | 2026-03-28 00:47:21.761375 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-28 00:47:21.761388 | orchestrator | Saturday 28 March 2026 00:47:14 +0000 (0:00:01.242) 0:00:15.591 ******** 2026-03-28 00:47:21.761409 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:47:21.761430 | orchestrator | 2026-03-28 00:47:21.761452 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-28 00:47:21.761472 | orchestrator | Saturday 28 March 2026 00:47:14 +0000 (0:00:00.155) 0:00:15.747 ******** 2026-03-28 00:47:21.761491 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:47:21.761550 | orchestrator | 2026-03-28 00:47:21.761571 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-28 00:47:21.761590 | orchestrator | Saturday 28 March 2026 00:47:14 +0000 (0:00:00.157) 0:00:15.904 ******** 2026-03-28 00:47:21.761601 | orchestrator | changed: [testbed-manager] 2026-03-28 00:47:21.761612 | orchestrator | 2026-03-28 00:47:21.761623 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-28 00:47:21.761634 | orchestrator | Saturday 28 March 2026 00:47:15 +0000 (0:00:01.072) 0:00:16.976 ******** 2026-03-28 00:47:21.761645 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-28 00:47:21.761655 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-28 00:47:21.761668 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-28 00:47:21.761679 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-28 00:47:21.761690 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-28 00:47:21.761701 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-28 00:47:21.761712 | orchestrator | 2026-03-28 00:47:21.761722 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-03-28 00:47:21.761733 | orchestrator | Saturday 28 March 2026 00:47:18 +0000 (0:00:02.384) 0:00:19.361 ******** 2026-03-28 00:47:21.761744 | orchestrator | ok: [testbed-manager] 2026-03-28 00:47:21.761755 | orchestrator | 2026-03-28 00:47:21.761766 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-28 00:47:21.761776 | orchestrator | Saturday 28 March 2026 00:47:19 +0000 (0:00:01.853) 0:00:21.215 ******** 2026-03-28 00:47:21.761787 | orchestrator | changed: [testbed-manager] 2026-03-28 00:47:21.761798 | orchestrator | 2026-03-28 00:47:21.761808 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:47:21.761820 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:47:21.761831 | orchestrator | 2026-03-28 00:47:21.761842 | orchestrator | 2026-03-28 00:47:21.761853 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:47:21.761863 | orchestrator | Saturday 28 March 2026 00:47:21 +0000 (0:00:01.427) 0:00:22.643 ******** 2026-03-28 00:47:21.761874 | 
orchestrator | =============================================================================== 2026-03-28 00:47:21.761885 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.59s 2026-03-28 00:47:21.761896 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.38s 2026-03-28 00:47:21.761906 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.85s 2026-03-28 00:47:21.761917 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.43s 2026-03-28 00:47:21.761928 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.29s 2026-03-28 00:47:21.761960 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.24s 2026-03-28 00:47:21.761971 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.07s 2026-03-28 00:47:21.761982 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.02s 2026-03-28 00:47:21.761993 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.96s 2026-03-28 00:47:21.762003 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.24s 2026-03-28 00:47:21.762077 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.16s 2026-03-28 00:47:21.762091 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.16s 2026-03-28 00:47:22.080187 | orchestrator | 2026-03-28 00:47:22.081926 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Mar 28 00:47:22 UTC 2026 2026-03-28 00:47:22.081984 | orchestrator | 2026-03-28 00:47:24.039295 | orchestrator | 2026-03-28 00:47:24 | INFO  | Collection nutshell is prepared for execution 2026-03-28 00:47:24.039499 | orchestrator | 2026-03-28 00:47:24 | INFO  | A [0] - 
dotfiles 2026-03-28 00:47:34.104423 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [0] - homer 2026-03-28 00:47:34.104501 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [0] - netdata 2026-03-28 00:47:34.104510 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [0] - openstackclient 2026-03-28 00:47:34.104524 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [0] - phpmyadmin 2026-03-28 00:47:34.104529 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [0] - common 2026-03-28 00:47:34.109991 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [1] -- loadbalancer 2026-03-28 00:47:34.110243 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [2] --- opensearch 2026-03-28 00:47:34.110259 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [2] --- mariadb-ng 2026-03-28 00:47:34.110268 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [3] ---- horizon 2026-03-28 00:47:34.110288 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [3] ---- keystone 2026-03-28 00:47:34.110310 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [4] ----- neutron 2026-03-28 00:47:34.110496 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [5] ------ wait-for-nova 2026-03-28 00:47:34.110661 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [6] ------- octavia 2026-03-28 00:47:34.112944 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [4] ----- barbican 2026-03-28 00:47:34.112990 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [4] ----- designate 2026-03-28 00:47:34.112995 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [4] ----- ironic 2026-03-28 00:47:34.113000 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [4] ----- placement 2026-03-28 00:47:34.113051 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [4] ----- magnum 2026-03-28 00:47:34.114208 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [1] -- openvswitch 2026-03-28 00:47:34.114266 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [2] --- ovn 2026-03-28 00:47:34.115008 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [1] -- memcached 2026-03-28 
00:47:34.115056 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [1] -- redis 2026-03-28 00:47:34.115107 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [1] -- rabbitmq-ng 2026-03-28 00:47:34.115516 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [0] - kubernetes 2026-03-28 00:47:34.119086 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [1] -- kubeconfig 2026-03-28 00:47:34.119140 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [1] -- copy-kubeconfig 2026-03-28 00:47:34.119146 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [0] - ceph 2026-03-28 00:47:34.122802 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [1] -- ceph-pools 2026-03-28 00:47:34.122858 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [2] --- copy-ceph-keys 2026-03-28 00:47:34.122864 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [3] ---- cephclient 2026-03-28 00:47:34.122869 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-03-28 00:47:34.122874 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [4] ----- wait-for-keystone 2026-03-28 00:47:34.122878 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [5] ------ kolla-ceph-rgw 2026-03-28 00:47:34.122882 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [5] ------ glance 2026-03-28 00:47:34.122886 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [5] ------ cinder 2026-03-28 00:47:34.122957 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [5] ------ nova 2026-03-28 00:47:34.123332 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [4] ----- prometheus 2026-03-28 00:47:34.123495 | orchestrator | 2026-03-28 00:47:34 | INFO  | A [5] ------ grafana 2026-03-28 00:47:34.346623 | orchestrator | 2026-03-28 00:47:34 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-03-28 00:47:34.346716 | orchestrator | 2026-03-28 00:47:34 | INFO  | Tasks are running in the background 2026-03-28 00:47:37.620452 | orchestrator | 2026-03-28 00:47:37 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-03-28 00:47:39.789004 | orchestrator | 2026-03-28 00:47:39 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED 2026-03-28 00:47:39.789556 | orchestrator | 2026-03-28 00:47:39 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:47:39.790402 | orchestrator | 2026-03-28 00:47:39 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED 2026-03-28 00:47:39.791285 | orchestrator | 2026-03-28 00:47:39 | INFO  | Task a4cadd00-0871-4eaf-9df4-0f1d067eb456 is in state STARTED 2026-03-28 00:47:39.791995 | orchestrator | 2026-03-28 00:47:39 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:47:39.792611 | orchestrator | 2026-03-28 00:47:39 | INFO  | Task 8ccd1ed1-ddaa-4068-ba22-e747ffc1393c is in state STARTED 2026-03-28 00:47:39.795101 | orchestrator | 2026-03-28 00:47:39 | INFO  | Task 3eae379f-0ded-4cc7-824e-17ebda99d8ef is in state STARTED 2026-03-28 00:47:39.795143 | orchestrator | 2026-03-28 00:47:39 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:47:42.847359 | orchestrator | 2026-03-28 00:47:42 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED 2026-03-28 00:47:42.847753 | orchestrator | 2026-03-28 00:47:42 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:47:42.849027 | orchestrator | 2026-03-28 00:47:42 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED 2026-03-28 00:47:42.849480 | orchestrator | 2026-03-28 00:47:42 | INFO  | Task a4cadd00-0871-4eaf-9df4-0f1d067eb456 is in state STARTED 2026-03-28 00:47:42.853082 | orchestrator | 2026-03-28 00:47:42 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:47:42.853362 | orchestrator | 2026-03-28 00:47:42 | INFO  | Task 8ccd1ed1-ddaa-4068-ba22-e747ffc1393c is in state STARTED 2026-03-28 00:47:42.854237 | orchestrator | 2026-03-28 00:47:42 | INFO  | Task 
3eae379f-0ded-4cc7-824e-17ebda99d8ef is in state STARTED 2026-03-28 00:47:42.855745 | orchestrator | 2026-03-28 00:47:42 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:47:45.894916 | orchestrator | 2026-03-28 00:47:45 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED 2026-03-28 00:47:45.895398 | orchestrator | 2026-03-28 00:47:45 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:47:45.895658 | orchestrator | 2026-03-28 00:47:45 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED 2026-03-28 00:47:45.896343 | orchestrator | 2026-03-28 00:47:45 | INFO  | Task a4cadd00-0871-4eaf-9df4-0f1d067eb456 is in state STARTED 2026-03-28 00:47:45.896867 | orchestrator | 2026-03-28 00:47:45 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:47:45.897490 | orchestrator | 2026-03-28 00:47:45 | INFO  | Task 8ccd1ed1-ddaa-4068-ba22-e747ffc1393c is in state STARTED 2026-03-28 00:47:45.897947 | orchestrator | 2026-03-28 00:47:45 | INFO  | Task 3eae379f-0ded-4cc7-824e-17ebda99d8ef is in state STARTED 2026-03-28 00:47:45.897989 | orchestrator | 2026-03-28 00:47:45 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:47:48.955953 | orchestrator | 2026-03-28 00:47:48 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED 2026-03-28 00:47:48.956058 | orchestrator | 2026-03-28 00:47:48 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:47:48.956073 | orchestrator | 2026-03-28 00:47:48 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED 2026-03-28 00:47:48.956085 | orchestrator | 2026-03-28 00:47:48 | INFO  | Task a4cadd00-0871-4eaf-9df4-0f1d067eb456 is in state STARTED 2026-03-28 00:47:48.956096 | orchestrator | 2026-03-28 00:47:48 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:47:48.956106 | orchestrator | 2026-03-28 00:47:48 | INFO  | Task 
8ccd1ed1-ddaa-4068-ba22-e747ffc1393c is in state STARTED 2026-03-28 00:47:48.956117 | orchestrator | 2026-03-28 00:47:48 | INFO  | Task 3eae379f-0ded-4cc7-824e-17ebda99d8ef is in state STARTED 2026-03-28 00:47:48.956128 | orchestrator | 2026-03-28 00:47:48 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:47:52.119121 | orchestrator | 2026-03-28 00:47:52 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED 2026-03-28 00:47:52.119209 | orchestrator | 2026-03-28 00:47:52 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:47:52.119224 | orchestrator | 2026-03-28 00:47:52 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED 2026-03-28 00:47:52.119236 | orchestrator | 2026-03-28 00:47:52 | INFO  | Task a4cadd00-0871-4eaf-9df4-0f1d067eb456 is in state STARTED 2026-03-28 00:47:52.119247 | orchestrator | 2026-03-28 00:47:52 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:47:52.119258 | orchestrator | 2026-03-28 00:47:52 | INFO  | Task 8ccd1ed1-ddaa-4068-ba22-e747ffc1393c is in state STARTED 2026-03-28 00:47:52.119269 | orchestrator | 2026-03-28 00:47:52 | INFO  | Task 3eae379f-0ded-4cc7-824e-17ebda99d8ef is in state STARTED 2026-03-28 00:47:52.119337 | orchestrator | 2026-03-28 00:47:52 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:47:55.101401 | orchestrator | 2026-03-28 00:47:55 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED 2026-03-28 00:47:55.101530 | orchestrator | 2026-03-28 00:47:55 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:47:55.101560 | orchestrator | 2026-03-28 00:47:55 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED 2026-03-28 00:47:55.101579 | orchestrator | 2026-03-28 00:47:55 | INFO  | Task a4cadd00-0871-4eaf-9df4-0f1d067eb456 is in state STARTED 2026-03-28 00:47:55.101598 | orchestrator | 2026-03-28 00:47:55 | INFO  | Task 
90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:47:55.101618 | orchestrator | 2026-03-28 00:47:55 | INFO  | Task 8ccd1ed1-ddaa-4068-ba22-e747ffc1393c is in state STARTED 2026-03-28 00:47:55.101636 | orchestrator | 2026-03-28 00:47:55 | INFO  | Task 3eae379f-0ded-4cc7-824e-17ebda99d8ef is in state STARTED 2026-03-28 00:47:55.101656 | orchestrator | 2026-03-28 00:47:55 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:47:58.308705 | orchestrator | 2026-03-28 00:47:58 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED 2026-03-28 00:47:58.308797 | orchestrator | 2026-03-28 00:47:58 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:47:58.319974 | orchestrator | 2026-03-28 00:47:58 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED 2026-03-28 00:47:58.320085 | orchestrator | 2026-03-28 00:47:58 | INFO  | Task a4cadd00-0871-4eaf-9df4-0f1d067eb456 is in state STARTED 2026-03-28 00:47:58.320100 | orchestrator | 2026-03-28 00:47:58 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:47:58.320604 | orchestrator | 2026-03-28 00:47:58 | INFO  | Task 8ccd1ed1-ddaa-4068-ba22-e747ffc1393c is in state STARTED 2026-03-28 00:47:58.324986 | orchestrator | 2026-03-28 00:47:58 | INFO  | Task 3eae379f-0ded-4cc7-824e-17ebda99d8ef is in state STARTED 2026-03-28 00:47:58.325043 | orchestrator | 2026-03-28 00:47:58 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:01.502878 | orchestrator | 2026-03-28 00:48:01.502976 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-03-28 00:48:01.502992 | orchestrator | 2026-03-28 00:48:01.503005 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
****
2026-03-28 00:48:01.503017 | orchestrator | Saturday 28 March 2026 00:47:49 +0000 (0:00:01.127) 0:00:01.127 ********
2026-03-28 00:48:01.503028 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:48:01.503040 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:48:01.503051 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:48:01.503061 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:48:01.503072 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:48:01.503083 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:48:01.503094 | orchestrator | changed: [testbed-manager]
2026-03-28 00:48:01.503105 | orchestrator |
2026-03-28 00:48:01.503115 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-03-28 00:48:01.503127 | orchestrator | Saturday 28 March 2026 00:47:52 +0000 (0:00:03.406) 0:00:04.534 ********
2026-03-28 00:48:01.503138 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-28 00:48:01.503149 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-28 00:48:01.503159 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-28 00:48:01.503170 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-28 00:48:01.503181 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-28 00:48:01.503211 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-28 00:48:01.503222 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-28 00:48:01.503233 | orchestrator |
2026-03-28 00:48:01.503244 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-03-28 00:48:01.503256 | orchestrator | Saturday 28 March 2026 00:47:53 +0000 (0:00:01.189) 0:00:05.723 ********
2026-03-28 00:48:01.503307 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-28 00:47:53.265536', 'end': '2026-03-28 00:47:53.273333', 'delta': '0:00:00.007797', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-28 00:48:01.503336 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-28 00:47:53.271629', 'end': '2026-03-28 00:47:53.280852', 'delta': '0:00:00.009223', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-28 00:48:01.503373 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-28 00:47:53.565306', 'end': '2026-03-28 00:47:53.588678', 'delta': '0:00:00.023372', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-28 00:48:01.503403 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-28 00:47:53.335542', 'end': '2026-03-28 00:47:53.340746', 'delta': '0:00:00.005204', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-28 00:48:01.503429 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-28 00:47:53.480155', 'end': '2026-03-28 00:47:53.489230', 'delta': '0:00:00.009075', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-28 00:48:01.503442 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-28 00:47:53.550484', 'end': '2026-03-28 00:47:53.560031', 'delta': '0:00:00.009547', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-28 00:48:01.503459 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-28 00:47:53.687436', 'end': '2026-03-28 00:47:53.693943', 'delta': '0:00:00.006507', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-28 00:48:01.503493 | orchestrator |
2026-03-28 00:48:01.503504 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-03-28 00:48:01.503516 | orchestrator | Saturday 28 March 2026 00:47:55 +0000 (0:00:01.833) 0:00:07.557 ********
2026-03-28 00:48:01.503527 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-28 00:48:01.503538 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-28 00:48:01.503549 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-28 00:48:01.503560 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-28 00:48:01.503583 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-28 00:48:01.503605 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-28 00:48:01.503616 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-28 00:48:01.503627 | orchestrator |
2026-03-28 00:48:01.503638 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
******
2026-03-28 00:48:01.503649 | orchestrator | Saturday 28 March 2026 00:47:57 +0000 (0:00:01.901) 0:00:09.458 ********
2026-03-28 00:48:01.503660 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-03-28 00:48:01.503671 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-03-28 00:48:01.503682 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-03-28 00:48:01.503693 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-03-28 00:48:01.503704 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-03-28 00:48:01.503714 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-03-28 00:48:01.503725 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-03-28 00:48:01.503736 | orchestrator |
2026-03-28 00:48:01.503747 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:48:01.503766 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:48:01.503779 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:48:01.503790 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:48:01.503801 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:48:01.503812 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:48:01.503823 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:48:01.503834 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:48:01.503844 | orchestrator |
2026-03-28 00:48:01.503855 | orchestrator |
2026-03-28 00:48:01.503866 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:48:01.503877 | orchestrator | Saturday 28 March 2026 00:48:00 +0000 (0:00:02.472) 0:00:11.931 ********
2026-03-28 00:48:01.503888 | orchestrator | ===============================================================================
2026-03-28 00:48:01.503899 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.40s
2026-03-28 00:48:01.503917 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.47s
2026-03-28 00:48:01.503928 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.90s
2026-03-28 00:48:01.503939 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.83s
2026-03-28 00:48:01.503950 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.19s
2026-03-28 00:48:01.503961 | orchestrator | 2026-03-28 00:48:01 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:48:01.503972 | orchestrator | 2026-03-28 00:48:01 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:48:01.503983 | orchestrator | 2026-03-28 00:48:01 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:48:01.503993 | orchestrator | 2026-03-28 00:48:01 | INFO  | Task a4cadd00-0871-4eaf-9df4-0f1d067eb456 is in state STARTED
2026-03-28 00:48:01.504004 | orchestrator | 2026-03-28 00:48:01 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:48:01.504020 | orchestrator | 2026-03-28 00:48:01 | INFO  | Task 8ccd1ed1-ddaa-4068-ba22-e747ffc1393c is in state STARTED
2026-03-28 00:48:01.504031 | orchestrator | 2026-03-28 00:48:01 | INFO  | Task 3eae379f-0ded-4cc7-824e-17ebda99d8ef is in state SUCCESS
2026-03-28 00:48:01.504043 | orchestrator | 2026-03-28 00:48:01 | INFO  | Wait 1 second(s) 
until the next check
2026-03-28 00:48:04.536635 | orchestrator | 2026-03-28 00:48:04 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:48:04.537787 | orchestrator | 2026-03-28 00:48:04 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED
2026-03-28 00:48:04.539975 | orchestrator | 2026-03-28 00:48:04 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:48:04.541121 | orchestrator | 2026-03-28 00:48:04 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:48:04.541755 | orchestrator | 2026-03-28 00:48:04 | INFO  | Task a4cadd00-0871-4eaf-9df4-0f1d067eb456 is in state STARTED
2026-03-28 00:48:04.543020 | orchestrator | 2026-03-28 00:48:04 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:48:04.544839 | orchestrator | 2026-03-28 00:48:04 | INFO  | Task 8ccd1ed1-ddaa-4068-ba22-e747ffc1393c is in state STARTED
2026-03-28 00:48:04.544869 | orchestrator | 2026-03-28 00:48:04 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:48:07.642703 | orchestrator | 2026-03-28 00:48:07 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:48:07.642761 | orchestrator | 2026-03-28 00:48:07 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED
2026-03-28 00:48:07.642770 | orchestrator | 2026-03-28 00:48:07 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:48:07.642778 | orchestrator | 2026-03-28 00:48:07 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:48:07.642785 | orchestrator | 2026-03-28 00:48:07 | INFO  | Task a4cadd00-0871-4eaf-9df4-0f1d067eb456 is in state STARTED
2026-03-28 00:48:07.642792 | orchestrator | 2026-03-28 00:48:07 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:48:07.642799 | orchestrator | 2026-03-28 00:48:07 | INFO  | Task 8ccd1ed1-ddaa-4068-ba22-e747ffc1393c is in state STARTED
2026-03-28 00:48:07.642807 | orchestrator | 2026-03-28 00:48:07 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:48:10.699934 | orchestrator | 2026-03-28 00:48:10 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:48:10.704054 | orchestrator | 2026-03-28 00:48:10 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED
2026-03-28 00:48:10.707738 | orchestrator | 2026-03-28 00:48:10 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:48:10.710285 | orchestrator | 2026-03-28 00:48:10 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:48:10.731333 | orchestrator | 2026-03-28 00:48:10 | INFO  | Task a4cadd00-0871-4eaf-9df4-0f1d067eb456 is in state STARTED
2026-03-28 00:48:10.738753 | orchestrator | 2026-03-28 00:48:10 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:48:10.741302 | orchestrator | 2026-03-28 00:48:10 | INFO  | Task 8ccd1ed1-ddaa-4068-ba22-e747ffc1393c is in state STARTED
2026-03-28 00:48:10.741375 | orchestrator | 2026-03-28 00:48:10 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:48:13.844196 | orchestrator | 2026-03-28 00:48:13 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:48:13.844370 | orchestrator | 2026-03-28 00:48:13 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED
2026-03-28 00:48:13.844393 | orchestrator | 2026-03-28 00:48:13 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:48:13.844408 | orchestrator | 2026-03-28 00:48:13 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:48:13.844421 | orchestrator | 2026-03-28 00:48:13 | INFO  | Task a4cadd00-0871-4eaf-9df4-0f1d067eb456 is in state STARTED
2026-03-28 00:48:13.844436 | orchestrator | 2026-03-28 00:48:13 | INFO  | Task 
90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:48:13.844449 | orchestrator | 2026-03-28 00:48:13 | INFO  | Task 8ccd1ed1-ddaa-4068-ba22-e747ffc1393c is in state STARTED
2026-03-28 00:48:13.844463 | orchestrator | 2026-03-28 00:48:13 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:48:16.893320 | orchestrator | 2026-03-28 00:48:16 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:48:16.902613 | orchestrator | 2026-03-28 00:48:16 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED
2026-03-28 00:48:16.905875 | orchestrator | 2026-03-28 00:48:16 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:48:16.910286 | orchestrator | 2026-03-28 00:48:16 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:48:16.916782 | orchestrator | 2026-03-28 00:48:16 | INFO  | Task a4cadd00-0871-4eaf-9df4-0f1d067eb456 is in state STARTED
2026-03-28 00:48:16.922561 | orchestrator | 2026-03-28 00:48:16 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:48:16.925333 | orchestrator | 2026-03-28 00:48:16 | INFO  | Task 8ccd1ed1-ddaa-4068-ba22-e747ffc1393c is in state STARTED
2026-03-28 00:48:16.926482 | orchestrator | 2026-03-28 00:48:16 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:48:19.970609 | orchestrator | 2026-03-28 00:48:19 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:48:19.972702 | orchestrator | 2026-03-28 00:48:19 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED
2026-03-28 00:48:19.975396 | orchestrator | 2026-03-28 00:48:19 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:48:19.976137 | orchestrator | 2026-03-28 00:48:19 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:48:19.976977 | orchestrator | 2026-03-28 00:48:19 | INFO  | Task a4cadd00-0871-4eaf-9df4-0f1d067eb456 is in state STARTED
2026-03-28 00:48:19.979169 | orchestrator | 2026-03-28 00:48:19 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:48:19.981627 | orchestrator | 2026-03-28 00:48:19 | INFO  | Task 8ccd1ed1-ddaa-4068-ba22-e747ffc1393c is in state STARTED
2026-03-28 00:48:19.981674 | orchestrator | 2026-03-28 00:48:19 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:48:23.054570 | orchestrator | 2026-03-28 00:48:23 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:48:23.072651 | orchestrator | 2026-03-28 00:48:23 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED
2026-03-28 00:48:23.097555 | orchestrator | 2026-03-28 00:48:23 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:48:23.112460 | orchestrator | 2026-03-28 00:48:23 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:48:23.113950 | orchestrator | 2026-03-28 00:48:23 | INFO  | Task a4cadd00-0871-4eaf-9df4-0f1d067eb456 is in state STARTED
2026-03-28 00:48:23.114960 | orchestrator | 2026-03-28 00:48:23 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:48:23.117072 | orchestrator | 2026-03-28 00:48:23 | INFO  | Task 8ccd1ed1-ddaa-4068-ba22-e747ffc1393c is in state STARTED
2026-03-28 00:48:23.117336 | orchestrator | 2026-03-28 00:48:23 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:48:26.602443 | orchestrator | 2026-03-28 00:48:26 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:48:26.606500 | orchestrator | 2026-03-28 00:48:26 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED
2026-03-28 00:48:26.609637 | orchestrator | 2026-03-28 00:48:26 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:48:26.611949 | orchestrator | 2026-03-28 00:48:26 | INFO  | Task 
b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:48:26.615684 | orchestrator | 2026-03-28 00:48:26 | INFO  | Task a4cadd00-0871-4eaf-9df4-0f1d067eb456 is in state SUCCESS
2026-03-28 00:48:26.617829 | orchestrator | 2026-03-28 00:48:26 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:48:26.621476 | orchestrator | 2026-03-28 00:48:26 | INFO  | Task 8ccd1ed1-ddaa-4068-ba22-e747ffc1393c is in state STARTED
2026-03-28 00:48:26.622627 | orchestrator | 2026-03-28 00:48:26 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:48:29.740585 | orchestrator | 2026-03-28 00:48:29 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:48:29.742558 | orchestrator | 2026-03-28 00:48:29 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED
2026-03-28 00:48:29.743496 | orchestrator | 2026-03-28 00:48:29 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:48:29.744210 | orchestrator | 2026-03-28 00:48:29 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:48:29.744821 | orchestrator | 2026-03-28 00:48:29 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:48:29.745784 | orchestrator | 2026-03-28 00:48:29 | INFO  | Task 8ccd1ed1-ddaa-4068-ba22-e747ffc1393c is in state STARTED
2026-03-28 00:48:29.745902 | orchestrator | 2026-03-28 00:48:29 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:48:32.839441 | orchestrator | 2026-03-28 00:48:32 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:48:32.846727 | orchestrator | 2026-03-28 00:48:32 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED
2026-03-28 00:48:32.850811 | orchestrator | 2026-03-28 00:48:32 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:48:32.850921 | orchestrator | 2026-03-28 00:48:32 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:48:32.853283 | orchestrator | 2026-03-28 00:48:32 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:48:32.858674 | orchestrator | 2026-03-28 00:48:32 | INFO  | Task 8ccd1ed1-ddaa-4068-ba22-e747ffc1393c is in state STARTED
2026-03-28 00:48:32.858746 | orchestrator | 2026-03-28 00:48:32 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:48:35.933717 | orchestrator | 2026-03-28 00:48:35 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:48:35.936186 | orchestrator | 2026-03-28 00:48:35 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED
2026-03-28 00:48:35.937171 | orchestrator | 2026-03-28 00:48:35 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:48:35.942320 | orchestrator | 2026-03-28 00:48:35 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:48:35.944194 | orchestrator | 2026-03-28 00:48:35 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:48:35.946604 | orchestrator | 2026-03-28 00:48:35 | INFO  | Task 8ccd1ed1-ddaa-4068-ba22-e747ffc1393c is in state STARTED
2026-03-28 00:48:35.946641 | orchestrator | 2026-03-28 00:48:35 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:48:39.030846 | orchestrator | 2026-03-28 00:48:39 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:48:39.033358 | orchestrator | 2026-03-28 00:48:39 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED
2026-03-28 00:48:39.033417 | orchestrator | 2026-03-28 00:48:39 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:48:39.034336 | orchestrator | 2026-03-28 00:48:39 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:48:39.035338 | orchestrator | 2026-03-28 00:48:39 | INFO  | Task 
90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:48:39.037768 | orchestrator | 2026-03-28 00:48:39 | INFO  | Task 8ccd1ed1-ddaa-4068-ba22-e747ffc1393c is in state STARTED
2026-03-28 00:48:39.037815 | orchestrator | 2026-03-28 00:48:39 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:48:42.101600 | orchestrator | 2026-03-28 00:48:42 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:48:42.102069 | orchestrator | 2026-03-28 00:48:42 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED
2026-03-28 00:48:42.102804 | orchestrator | 2026-03-28 00:48:42 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:48:42.104253 | orchestrator | 2026-03-28 00:48:42 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:48:42.113001 | orchestrator | 2026-03-28 00:48:42 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:48:42.114316 | orchestrator | 2026-03-28 00:48:42 | INFO  | Task 8ccd1ed1-ddaa-4068-ba22-e747ffc1393c is in state SUCCESS
2026-03-28 00:48:42.114360 | orchestrator | 2026-03-28 00:48:42 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:48:45.163950 | orchestrator | 2026-03-28 00:48:45 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:48:45.164841 | orchestrator | 2026-03-28 00:48:45 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED
2026-03-28 00:48:45.166803 | orchestrator | 2026-03-28 00:48:45 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:48:45.167746 | orchestrator | 2026-03-28 00:48:45 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:48:45.168861 | orchestrator | 2026-03-28 00:48:45 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:48:45.168912 | orchestrator | 2026-03-28 00:48:45 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:48:48.262842 | orchestrator | 2026-03-28 00:48:48 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:48:48.262935 | orchestrator | 2026-03-28 00:48:48 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED
2026-03-28 00:48:48.262949 | orchestrator | 2026-03-28 00:48:48 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:48:48.262961 | orchestrator | 2026-03-28 00:48:48 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:48:48.262972 | orchestrator | 2026-03-28 00:48:48 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:48:48.262985 | orchestrator | 2026-03-28 00:48:48 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:48:51.425505 | orchestrator | 2026-03-28 00:48:51 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:48:51.426187 | orchestrator | 2026-03-28 00:48:51 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED
2026-03-28 00:48:51.431811 | orchestrator | 2026-03-28 00:48:51 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:48:51.434275 | orchestrator | 2026-03-28 00:48:51 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:48:51.440374 | orchestrator | 2026-03-28 00:48:51 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:48:51.440498 | orchestrator | 2026-03-28 00:48:51 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:48:54.490935 | orchestrator | 2026-03-28 00:48:54 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:48:54.498923 | orchestrator | 2026-03-28 00:48:54 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED
2026-03-28 00:48:54.502009 | orchestrator | 2026-03-28 00:48:54 | INFO  | Task 
d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:48:54.505750 | orchestrator | 2026-03-28 00:48:54 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:48:54.507169 | orchestrator | 2026-03-28 00:48:54 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:48:54.507477 | orchestrator | 2026-03-28 00:48:54 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:48:57.610871 | orchestrator | 2026-03-28 00:48:57 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:48:57.610974 | orchestrator | 2026-03-28 00:48:57 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED
2026-03-28 00:48:57.610987 | orchestrator | 2026-03-28 00:48:57 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:48:57.611017 | orchestrator | 2026-03-28 00:48:57 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:48:57.611028 | orchestrator | 2026-03-28 00:48:57 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:48:57.611038 | orchestrator | 2026-03-28 00:48:57 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:49:00.727793 | orchestrator | 2026-03-28 00:49:00 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:49:00.727993 | orchestrator | 2026-03-28 00:49:00 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED
2026-03-28 00:49:00.732090 | orchestrator | 2026-03-28 00:49:00 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:49:00.732825 | orchestrator | 2026-03-28 00:49:00 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:49:00.743684 | orchestrator | 2026-03-28 00:49:00 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:49:00.743753 | orchestrator | 2026-03-28 00:49:00 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:49:03.905599 | orchestrator | 2026-03-28 00:49:03 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:49:03.906462 | orchestrator | 2026-03-28 00:49:03 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED
2026-03-28 00:49:03.909423 | orchestrator | 2026-03-28 00:49:03 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:49:03.910283 | orchestrator | 2026-03-28 00:49:03 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:49:03.912399 | orchestrator | 2026-03-28 00:49:03 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:49:03.912456 | orchestrator | 2026-03-28 00:49:03 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:49:06.987376 | orchestrator | 2026-03-28 00:49:06 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:49:06.989317 | orchestrator | 2026-03-28 00:49:06 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED
2026-03-28 00:49:06.991325 | orchestrator | 2026-03-28 00:49:06 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:49:06.992550 | orchestrator | 2026-03-28 00:49:06 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:49:06.994293 | orchestrator | 2026-03-28 00:49:06 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:49:06.994361 | orchestrator | 2026-03-28 00:49:06 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:49:10.090261 | orchestrator | 2026-03-28 00:49:10 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED
2026-03-28 00:49:10.091989 | orchestrator | 2026-03-28 00:49:10 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED
2026-03-28 00:49:10.093877 | orchestrator | 2026-03-28 00:49:10 | INFO  | Task 
d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:49:10.096137 | orchestrator | 2026-03-28 00:49:10 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED 2026-03-28 00:49:10.097745 | orchestrator | 2026-03-28 00:49:10 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:49:10.097995 | orchestrator | 2026-03-28 00:49:10 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:13.196214 | orchestrator | 2026-03-28 00:49:13 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED 2026-03-28 00:49:13.206805 | orchestrator | 2026-03-28 00:49:13 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED 2026-03-28 00:49:13.212344 | orchestrator | 2026-03-28 00:49:13 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:49:13.215654 | orchestrator | 2026-03-28 00:49:13 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED 2026-03-28 00:49:13.217822 | orchestrator | 2026-03-28 00:49:13 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:49:13.217901 | orchestrator | 2026-03-28 00:49:13 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:16.316598 | orchestrator | 2026-03-28 00:49:16 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED 2026-03-28 00:49:16.317721 | orchestrator | 2026-03-28 00:49:16 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED 2026-03-28 00:49:16.319969 | orchestrator | 2026-03-28 00:49:16 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:49:16.322544 | orchestrator | 2026-03-28 00:49:16 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED 2026-03-28 00:49:16.324260 | orchestrator | 2026-03-28 00:49:16 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:49:16.324298 | orchestrator | 2026-03-28 00:49:16 | INFO  | Wait 1 
second(s) until the next check 2026-03-28 00:49:19.426919 | orchestrator | 2026-03-28 00:49:19 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED 2026-03-28 00:49:19.430767 | orchestrator | 2026-03-28 00:49:19 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED 2026-03-28 00:49:19.436261 | orchestrator | 2026-03-28 00:49:19 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:49:19.440292 | orchestrator | 2026-03-28 00:49:19 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED 2026-03-28 00:49:19.445971 | orchestrator | 2026-03-28 00:49:19 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:49:19.446091 | orchestrator | 2026-03-28 00:49:19 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:22.520183 | orchestrator | 2026-03-28 00:49:22 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED 2026-03-28 00:49:22.521043 | orchestrator | 2026-03-28 00:49:22 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED 2026-03-28 00:49:22.522657 | orchestrator | 2026-03-28 00:49:22 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:49:22.524371 | orchestrator | 2026-03-28 00:49:22 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED 2026-03-28 00:49:22.524769 | orchestrator | 2026-03-28 00:49:22 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:49:22.526406 | orchestrator | 2026-03-28 00:49:22 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:25.576242 | orchestrator | 2026-03-28 00:49:25 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state STARTED 2026-03-28 00:49:25.579748 | orchestrator | 2026-03-28 00:49:25 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED 2026-03-28 00:49:25.581623 | orchestrator | 2026-03-28 00:49:25 | INFO  | Task 
d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:49:25.582307 | orchestrator | 2026-03-28 00:49:25 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:49:25.584707 | orchestrator | 2026-03-28 00:49:25 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:49:25.584758 | orchestrator | 2026-03-28 00:49:25 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:49:28.654189 | orchestrator | 2026-03-28 00:49:28 | INFO  | Task f102d714-3a23-4ae4-a46e-3dd7955bf1cd is in state SUCCESS
2026-03-28 00:49:28.655106 | orchestrator |
2026-03-28 00:49:28.655247 | orchestrator |
2026-03-28 00:49:28.655263 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-03-28 00:49:28.655273 | orchestrator |
2026-03-28 00:49:28.655281 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-03-28 00:49:28.655310 | orchestrator | Saturday 28 March 2026 00:47:47 +0000 (0:00:00.494) 0:00:00.494 ********
2026-03-28 00:49:28.655319 | orchestrator | ok: [testbed-manager] => {
2026-03-28 00:49:28.655330 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-03-28 00:49:28.655341 | orchestrator | }
2026-03-28 00:49:28.655350 | orchestrator |
2026-03-28 00:49:28.655359 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-28 00:49:28.655367 | orchestrator | Saturday 28 March 2026 00:47:47 +0000 (0:00:00.227) 0:00:00.722 ********
2026-03-28 00:49:28.655376 | orchestrator | ok: [testbed-manager]
2026-03-28 00:49:28.655386 | orchestrator |
2026-03-28 00:49:28.655395 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-28 00:49:28.655403 | orchestrator | Saturday 28 March 2026 00:47:49 +0000 (0:00:01.784) 0:00:02.506 ********
2026-03-28 00:49:28.655412 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-28 00:49:28.655420 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-28 00:49:28.655427 | orchestrator |
2026-03-28 00:49:28.655434 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-28 00:49:28.655442 | orchestrator | Saturday 28 March 2026 00:47:51 +0000 (0:00:01.836) 0:00:04.343 ********
2026-03-28 00:49:28.655448 | orchestrator | changed: [testbed-manager]
2026-03-28 00:49:28.655456 | orchestrator |
2026-03-28 00:49:28.655467 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-28 00:49:28.655474 | orchestrator | Saturday 28 March 2026 00:47:54 +0000 (0:00:03.460) 0:00:07.803 ********
2026-03-28 00:49:28.655482 | orchestrator | changed: [testbed-manager]
2026-03-28 00:49:28.655490 | orchestrator |
2026-03-28 00:49:28.655501 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-28 00:49:28.655510 | orchestrator | Saturday 28 March 2026 00:47:57 +0000 (0:00:02.522) 0:00:10.325 ********
2026-03-28 00:49:28.655525 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-03-28 00:49:28.655533 | orchestrator | ok: [testbed-manager]
2026-03-28 00:49:28.655541 | orchestrator |
2026-03-28 00:49:28.655548 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-28 00:49:28.655556 | orchestrator | Saturday 28 March 2026 00:48:21 +0000 (0:00:24.665) 0:00:34.990 ********
2026-03-28 00:49:28.655563 | orchestrator | changed: [testbed-manager]
2026-03-28 00:49:28.655571 | orchestrator |
2026-03-28 00:49:28.655579 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:49:28.655586 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:49:28.655595 | orchestrator |
2026-03-28 00:49:28.655604 | orchestrator |
2026-03-28 00:49:28.655611 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:49:28.655619 | orchestrator | Saturday 28 March 2026 00:48:23 +0000 (0:00:02.172) 0:00:37.163 ********
2026-03-28 00:49:28.655624 | orchestrator | ===============================================================================
2026-03-28 00:49:28.655628 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.67s
2026-03-28 00:49:28.655633 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.46s
2026-03-28 00:49:28.655638 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.52s
2026-03-28 00:49:28.655642 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.17s
2026-03-28 00:49:28.655647 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.84s
2026-03-28 00:49:28.655652 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.78s
2026-03-28 00:49:28.655656 | orchestrator | osism.services.homer : Inform
about new parameter homer_url_opensearch_dashboards --- 0.23s
2026-03-28 00:49:28.655662 | orchestrator |
2026-03-28 00:49:28.655669 | orchestrator |
2026-03-28 00:49:28.655686 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-28 00:49:28.655696 | orchestrator |
2026-03-28 00:49:28.655705 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-28 00:49:28.655712 | orchestrator | Saturday 28 March 2026 00:47:50 +0000 (0:00:00.957) 0:00:00.957 ********
2026-03-28 00:49:28.655719 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-28 00:49:28.655727 | orchestrator |
2026-03-28 00:49:28.655734 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-28 00:49:28.655742 | orchestrator | Saturday 28 March 2026 00:47:51 +0000 (0:00:00.995) 0:00:01.953 ********
2026-03-28 00:49:28.655750 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-28 00:49:28.655757 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-28 00:49:28.655764 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-28 00:49:28.655771 | orchestrator |
2026-03-28 00:49:28.655777 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-28 00:49:28.655785 | orchestrator | Saturday 28 March 2026 00:47:55 +0000 (0:00:04.031) 0:00:05.984 ********
2026-03-28 00:49:28.655792 | orchestrator | changed: [testbed-manager]
2026-03-28 00:49:28.655798 | orchestrator |
2026-03-28 00:49:28.655806 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-28 00:49:28.655813 | orchestrator | Saturday 28 March 2026 00:47:58 +0000 (0:00:02.947) 0:00:08.931 ********
2026-03-28 00:49:28.655836 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-28 00:49:28.655845 | orchestrator | ok: [testbed-manager]
2026-03-28 00:49:28.655853 | orchestrator |
2026-03-28 00:49:28.655860 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-28 00:49:28.655867 | orchestrator | Saturday 28 March 2026 00:48:32 +0000 (0:00:34.225) 0:00:43.157 ********
2026-03-28 00:49:28.655874 | orchestrator | changed: [testbed-manager]
2026-03-28 00:49:28.655882 | orchestrator |
2026-03-28 00:49:28.655890 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-28 00:49:28.655899 | orchestrator | Saturday 28 March 2026 00:48:34 +0000 (0:00:01.689) 0:00:44.846 ********
2026-03-28 00:49:28.655906 | orchestrator | ok: [testbed-manager]
2026-03-28 00:49:28.655913 | orchestrator |
2026-03-28 00:49:28.655921 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-28 00:49:28.655928 | orchestrator | Saturday 28 March 2026 00:48:34 +0000 (0:00:00.843) 0:00:45.690 ********
2026-03-28 00:49:28.655935 | orchestrator | changed: [testbed-manager]
2026-03-28 00:49:28.655942 | orchestrator |
2026-03-28 00:49:28.655949 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-28 00:49:28.655956 | orchestrator | Saturday 28 March 2026 00:48:37 +0000 (0:00:03.062) 0:00:48.752 ********
2026-03-28 00:49:28.655963 | orchestrator | changed: [testbed-manager]
2026-03-28 00:49:28.655970 | orchestrator |
2026-03-28 00:49:28.655978 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-28 00:49:28.655986 | orchestrator | Saturday 28 March 2026 00:48:38 +0000 (0:00:00.991) 0:00:49.744 ********
2026-03-28 00:49:28.655993 | orchestrator | changed: [testbed-manager]
2026-03-28 00:49:28.656001 | orchestrator |
2026-03-28 00:49:28.656009 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-28 00:49:28.656017 | orchestrator | Saturday 28 March 2026 00:48:40 +0000 (0:00:01.251) 0:00:50.995 ********
2026-03-28 00:49:28.656023 | orchestrator | ok: [testbed-manager]
2026-03-28 00:49:28.656028 | orchestrator |
2026-03-28 00:49:28.656034 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:49:28.656039 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:49:28.656051 | orchestrator |
2026-03-28 00:49:28.656057 | orchestrator |
2026-03-28 00:49:28.656063 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:49:28.656069 | orchestrator | Saturday 28 March 2026 00:48:40 +0000 (0:00:00.776) 0:00:51.772 ********
2026-03-28 00:49:28.656074 | orchestrator | ===============================================================================
2026-03-28 00:49:28.656079 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.23s
2026-03-28 00:49:28.656085 | orchestrator | osism.services.openstackclient : Create required directories ------------ 4.03s
2026-03-28 00:49:28.656090 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.06s
2026-03-28 00:49:28.656094 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.94s
2026-03-28 00:49:28.656099 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.69s
2026-03-28 00:49:28.656104 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.25s
2026-03-28 00:49:28.656110 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.00s
2026-03-28 00:49:28.656117 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.99s
2026-03-28 00:49:28.656128 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.84s
2026-03-28 00:49:28.656136 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.78s
2026-03-28 00:49:28.656145 | orchestrator |
2026-03-28 00:49:28.656176 | orchestrator |
2026-03-28 00:49:28.656183 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 00:49:28.656191 | orchestrator |
2026-03-28 00:49:28.656198 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 00:49:28.656204 | orchestrator | Saturday 28 March 2026 00:47:48 +0000 (0:00:00.218) 0:00:00.218 ********
2026-03-28 00:49:28.656211 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-28 00:49:28.656218 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-28 00:49:28.656224 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-28 00:49:28.656232 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-28 00:49:28.656238 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-28 00:49:28.656246 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-28 00:49:28.656252 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-28 00:49:28.656260 | orchestrator |
2026-03-28 00:49:28.656267 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-28 00:49:28.656274 | orchestrator |
2026-03-28 00:49:28.656281 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-28 00:49:28.656289 | orchestrator | Saturday 28 March 2026 00:47:49 +0000
(0:00:01.092) 0:00:01.314 ********
2026-03-28 00:49:28.656310 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:49:28.656320 | orchestrator |
2026-03-28 00:49:28.656325 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-28 00:49:28.656353 | orchestrator | Saturday 28 March 2026 00:47:51 +0000 (0:00:02.265) 0:00:03.579 ********
2026-03-28 00:49:28.656358 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:49:28.656363 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:49:28.656368 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:49:28.656373 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:49:28.656377 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:49:28.656390 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:49:28.656394 | orchestrator | ok: [testbed-manager]
2026-03-28 00:49:28.656399 | orchestrator |
2026-03-28 00:49:28.656404 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-28 00:49:28.656409 | orchestrator | Saturday 28 March 2026 00:47:53 +0000 (0:00:02.374) 0:00:05.954 ********
2026-03-28 00:49:28.656419 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:49:28.656423 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:49:28.656428 | orchestrator | ok: [testbed-manager]
2026-03-28 00:49:28.656433 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:49:28.656437 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:49:28.656442 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:49:28.656447 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:49:28.656451 | orchestrator |
2026-03-28 00:49:28.656456 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-28 00:49:28.656460 | orchestrator | Saturday 28 March 2026 00:47:56 +0000 (0:00:02.817) 0:00:08.772 ********
2026-03-28 00:49:28.656465 | orchestrator | changed: [testbed-manager]
2026-03-28 00:49:28.656469 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:49:28.656474 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:49:28.656478 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:49:28.656483 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:49:28.656487 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:49:28.656492 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:49:28.656496 | orchestrator |
2026-03-28 00:49:28.656501 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-28 00:49:28.656505 | orchestrator | Saturday 28 March 2026 00:47:59 +0000 (0:00:02.806) 0:00:11.579 ********
2026-03-28 00:49:28.656510 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:49:28.656514 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:49:28.656519 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:49:28.656523 | orchestrator | changed: [testbed-manager]
2026-03-28 00:49:28.656527 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:49:28.656532 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:49:28.656536 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:49:28.656541 | orchestrator |
2026-03-28 00:49:28.656545 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-28 00:49:28.656550 | orchestrator | Saturday 28 March 2026 00:48:09 +0000 (0:00:09.998) 0:00:21.578 ********
2026-03-28 00:49:28.656555 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:49:28.656561 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:49:28.656566 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:49:28.656571 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:49:28.656575 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:49:28.656579 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:49:28.656584 | orchestrator | changed: [testbed-manager]
2026-03-28 00:49:28.656588 | orchestrator |
2026-03-28 00:49:28.656593 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-28 00:49:28.656598 | orchestrator | Saturday 28 March 2026 00:48:51 +0000 (0:00:41.923) 0:01:03.501 ********
2026-03-28 00:49:28.656603 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:49:28.656609 | orchestrator |
2026-03-28 00:49:28.656614 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-28 00:49:28.656618 | orchestrator | Saturday 28 March 2026 00:48:53 +0000 (0:00:01.715) 0:01:05.217 ********
2026-03-28 00:49:28.656623 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-28 00:49:28.656628 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-28 00:49:28.656632 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-28 00:49:28.656637 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-28 00:49:28.656641 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-28 00:49:28.656646 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-28 00:49:28.656650 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-28 00:49:28.656655 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-28 00:49:28.656663 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-28 00:49:28.656667 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-28 00:49:28.656672 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-28 00:49:28.656676 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-28 00:49:28.656681 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-28 00:49:28.656685 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-28 00:49:28.656690 | orchestrator |
2026-03-28 00:49:28.656694 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-28 00:49:28.656700 | orchestrator | Saturday 28 March 2026 00:49:00 +0000 (0:00:06.857) 0:01:12.074 ********
2026-03-28 00:49:28.656704 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:49:28.656709 | orchestrator | ok: [testbed-manager]
2026-03-28 00:49:28.656713 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:49:28.656718 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:49:28.656722 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:49:28.656727 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:49:28.656731 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:49:28.656736 | orchestrator |
2026-03-28 00:49:28.656740 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-28 00:49:28.656745 | orchestrator | Saturday 28 March 2026 00:49:01 +0000 (0:00:01.364) 0:01:13.439 ********
2026-03-28 00:49:28.656749 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:49:28.656754 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:49:28.656758 | orchestrator | changed: [testbed-manager]
2026-03-28 00:49:28.656762 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:49:28.656767 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:49:28.656771 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:49:28.656776 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:49:28.656780 | orchestrator |
2026-03-28 00:49:28.656784 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-28 00:49:28.656792 | orchestrator | Saturday 28 March 2026 00:49:03 +0000 (0:00:02.222) 0:01:15.661 ********
2026-03-28 00:49:28.656796 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:49:28.656801 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:49:28.656806 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:49:28.656810 | orchestrator | ok: [testbed-manager]
2026-03-28 00:49:28.656814 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:49:28.656819 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:49:28.656823 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:49:28.656828 | orchestrator |
2026-03-28 00:49:28.656832 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-28 00:49:28.656837 | orchestrator | Saturday 28 March 2026 00:49:06 +0000 (0:00:02.390) 0:01:18.052 ********
2026-03-28 00:49:28.656842 | orchestrator | ok: [testbed-manager]
2026-03-28 00:49:28.656846 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:49:28.656850 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:49:28.656855 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:49:28.656859 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:49:28.656864 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:49:28.656868 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:49:28.656873 | orchestrator |
2026-03-28 00:49:28.656877 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-28 00:49:28.656882 | orchestrator | Saturday 28 March 2026 00:49:08 +0000 (0:00:02.808) 0:01:20.861 ********
2026-03-28 00:49:28.656886 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-28 00:49:28.656892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:49:28.656897 | orchestrator |
2026-03-28 00:49:28.656901 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-28 00:49:28.656909 | orchestrator | Saturday 28 March 2026 00:49:10 +0000 (0:00:02.017) 0:01:22.879 ********
2026-03-28 00:49:28.656914 | orchestrator | changed: [testbed-manager]
2026-03-28 00:49:28.656918 | orchestrator |
2026-03-28 00:49:28.656923 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-28 00:49:28.656934 | orchestrator | Saturday 28 March 2026 00:49:14 +0000 (0:00:03.686) 0:01:26.566 ********
2026-03-28 00:49:28.656939 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:49:28.656943 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:49:28.656948 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:49:28.656952 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:49:28.656956 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:49:28.656961 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:49:28.656969 | orchestrator | changed: [testbed-manager]
2026-03-28 00:49:28.656976 | orchestrator |
2026-03-28 00:49:28.656982 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:49:28.656993 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:49:28.657003 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:49:28.657010 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:49:28.657017 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:49:28.657023 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:49:28.657030 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:49:28.657037 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:49:28.657043 | orchestrator |
2026-03-28 00:49:28.657052 | orchestrator |
2026-03-28 00:49:28.657060 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:49:28.657113 | orchestrator | Saturday 28 March 2026 00:49:26 +0000 (0:00:11.744) 0:01:38.310 ********
2026-03-28 00:49:28.657123 | orchestrator | ===============================================================================
2026-03-28 00:49:28.657130 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 41.92s
2026-03-28 00:49:28.657137 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.74s
2026-03-28 00:49:28.657144 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.00s
2026-03-28 00:49:28.657172 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.86s
2026-03-28 00:49:28.657179 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.69s
2026-03-28 00:49:28.657186 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.82s
2026-03-28 00:49:28.657193 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.81s
2026-03-28 00:49:28.657200 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.81s
2026-03-28 00:49:28.657207 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.39s
2026-03-28 00:49:28.657214 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.37s
2026-03-28
00:49:28.657221 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.27s 2026-03-28 00:49:28.657234 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.22s 2026-03-28 00:49:28.657242 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.02s 2026-03-28 00:49:28.657257 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.72s 2026-03-28 00:49:28.657264 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.36s 2026-03-28 00:49:28.657271 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.10s 2026-03-28 00:49:28.657392 | orchestrator | 2026-03-28 00:49:28 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED 2026-03-28 00:49:28.657406 | orchestrator | 2026-03-28 00:49:28 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:49:28.660401 | orchestrator | 2026-03-28 00:49:28 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED 2026-03-28 00:49:28.661617 | orchestrator | 2026-03-28 00:49:28 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:49:28.661643 | orchestrator | 2026-03-28 00:49:28 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:31.694367 | orchestrator | 2026-03-28 00:49:31 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state STARTED 2026-03-28 00:49:31.696297 | orchestrator | 2026-03-28 00:49:31 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:49:31.697174 | orchestrator | 2026-03-28 00:49:31 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED 2026-03-28 00:49:31.699680 | orchestrator | 2026-03-28 00:49:31 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:49:31.699740 | orchestrator | 
2026-03-28 00:49:31 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:49:37.789291 | orchestrator | 2026-03-28 00:49:37 | INFO  | Task e9e32e34-1aa1-42a8-a40c-905f87690a0f is in state SUCCESS
2026-03-28 00:49:37.790582 | orchestrator | 2026-03-28 00:49:37 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:49:37.792704 | orchestrator | 2026-03-28 00:49:37 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:49:37.795545 | orchestrator | 2026-03-28 00:49:37 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:49:37.795630 | orchestrator | 2026-03-28 00:49:37 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:50:29.821478 | orchestrator | 2026-03-28 00:50:29 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:50:29.823043 | orchestrator | 2026-03-28 00:50:29 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in
state STARTED
2026-03-28 00:50:29.825234 | orchestrator | 2026-03-28 00:50:29 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:50:29.825300 | orchestrator | 2026-03-28 00:50:29 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:50:32.872644 | orchestrator | 2026-03-28 00:50:32 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:50:32.876151 | orchestrator | 2026-03-28 00:50:32 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state STARTED
2026-03-28 00:50:32.877726 | orchestrator | 2026-03-28 00:50:32 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:50:32.878189 | orchestrator | 2026-03-28 00:50:32 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:50:35.966379 | orchestrator |
2026-03-28 00:50:35.966575 | orchestrator |
2026-03-28 00:50:35.966605 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-03-28 00:50:35.966614 | orchestrator |
2026-03-28 00:50:35.966621 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-03-28 00:50:35.966628 | orchestrator | Saturday 28 March 2026 00:48:06 +0000 (0:00:00.289) 0:00:00.289 ********
2026-03-28 00:50:35.966635 | orchestrator | ok: [testbed-manager]
2026-03-28 00:50:35.966644 | orchestrator |
2026-03-28 00:50:35.966651 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-03-28 00:50:35.966659 | orchestrator | Saturday 28 March 2026 00:48:08 +0000 (0:00:01.774) 0:00:02.063 ********
2026-03-28 00:50:35.966666 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-03-28 00:50:35.966674 | orchestrator |
2026-03-28 00:50:35.966681 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-03-28 00:50:35.966687 | orchestrator | Saturday 28 March 2026 00:48:08 +0000 (0:00:00.949) 0:00:03.013 ********
2026-03-28 00:50:35.966694 | orchestrator | changed: [testbed-manager]
2026-03-28 00:50:35.966701 | orchestrator |
2026-03-28 00:50:35.966708 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-03-28 00:50:35.966715 | orchestrator | Saturday 28 March 2026 00:48:12 +0000 (0:00:03.792) 0:00:06.805 ********
2026-03-28 00:50:35.966722 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-03-28 00:50:35.966729 | orchestrator | ok: [testbed-manager]
2026-03-28 00:50:35.966736 | orchestrator |
2026-03-28 00:50:35.966743 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-03-28 00:50:35.966749 | orchestrator | Saturday 28 March 2026 00:49:22 +0000 (0:01:09.520) 0:01:16.325 ********
2026-03-28 00:50:35.966755 | orchestrator | changed: [testbed-manager]
2026-03-28 00:50:35.966761 | orchestrator |
2026-03-28 00:50:35.966767 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:50:35.966774 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:50:35.966781 | orchestrator |
2026-03-28 00:50:35.966787 | orchestrator |
2026-03-28 00:50:35.966793 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:50:35.966800 | orchestrator | Saturday 28 March 2026 00:49:35 +0000 (0:00:12.790) 0:01:29.116 ********
2026-03-28 00:50:35.966806 | orchestrator | ===============================================================================
2026-03-28 00:50:35.966812 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 69.52s
2026-03-28 00:50:35.966818 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 12.79s
2026-03-28 00:50:35.966825 | orchestrator |
osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 3.79s
2026-03-28 00:50:35.966849 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.77s
2026-03-28 00:50:35.966857 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.95s
2026-03-28 00:50:35.966863 | orchestrator |
2026-03-28 00:50:35.966869 | orchestrator |
2026-03-28 00:50:35.966875 | orchestrator | PLAY [Apply role common] *******************************************************
2026-03-28 00:50:35.966881 | orchestrator |
2026-03-28 00:50:35.966887 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-28 00:50:35.966894 | orchestrator | Saturday 28 March 2026 00:47:39 +0000 (0:00:00.308) 0:00:00.309 ********
2026-03-28 00:50:35.966900 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:50:35.966908 | orchestrator |
2026-03-28 00:50:35.966914 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-28 00:50:35.966920 | orchestrator | Saturday 28 March 2026 00:47:41 +0000 (0:00:01.640) 0:00:01.949 ********
2026-03-28 00:50:35.966926 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-28 00:50:35.966933 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-28 00:50:35.966941 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-28 00:50:35.966954 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-28 00:50:35.966962 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-28 00:50:35.966968 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-28 00:50:35.966975 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-28 00:50:35.966982 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-28 00:50:35.966989 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-28 00:50:35.966997 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-28 00:50:35.967004 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-28 00:50:35.967011 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-28 00:50:35.967018 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-28 00:50:35.967025 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-28 00:50:35.967032 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-28 00:50:35.967039 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-28 00:50:35.967096 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-28 00:50:35.967104 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-28 00:50:35.967111 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-28 00:50:35.967118 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-28 00:50:35.967124 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-28 00:50:35.967131 | orchestrator |
2026-03-28
00:50:35.967138 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-28 00:50:35.967145 | orchestrator | Saturday 28 March 2026 00:47:46 +0000 (0:00:04.857) 0:00:06.806 ********
2026-03-28 00:50:35.967152 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:50:35.967171 | orchestrator |
2026-03-28 00:50:35.967178 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-03-28 00:50:35.967184 | orchestrator | Saturday 28 March 2026 00:47:47 +0000 (0:00:01.504) 0:00:08.311 ********
2026-03-28 00:50:35.967195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 00:50:35.967205 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 00:50:35.967212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 00:50:35.967218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 00:50:35.967231 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 00:50:35.967255 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 00:50:35.967262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.967276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.967282 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 00:50:35.967288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.967297 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.967303 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.967316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.967329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.967344 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.967351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.967357 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.967363 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.967372 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.967378 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.967384 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.967390 | orchestrator |
2026-03-28 00:50:35.967396 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-03-28 00:50:35.967406 | orchestrator | Saturday 28 March 2026 00:47:55 +0000 (0:00:07.171) 0:00:15.483 ********
2026-03-28 00:50:35.967418 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 00:50:35.967424 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.967431 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.967436 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:50:35.967443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 00:50:35.967449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.967459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.967466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 00:50:35.967481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.967498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 00:50:35.967505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.967512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.967519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.967525 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:50:35.967532 | orchestrator
| skipping: [testbed-node-0] 2026-03-28 00:50:35.967538 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:50:35.967556 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:50:35.967564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.967581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.967588 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:50:35.967595 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:50:35.967601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.967607 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:50:35.967613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.967619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.967629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.967641 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:50:35.967653 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:50:35.967659 | orchestrator | 2026-03-28 00:50:35.967666 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-28 00:50:35.967672 | orchestrator | Saturday 28 March 2026 00:47:56 +0000 (0:00:01.906) 0:00:17.389 ******** 2026-03-28 00:50:35.967678 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:50:35.967688 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.967695 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.967701 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:50:35.967707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:50:35.967714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.967720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.967729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:50:35.967739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.967752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.967758 | orchestrator | 2026-03-28 00:50:35 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:50:35.967765 | orchestrator | 2026-03-28 00:50:35 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:50:35.967771 | orchestrator | 2026-03-28 00:50:35 | INFO  | Task b5523a10-1493-435a-a509-f3e0e2368656 is in state SUCCESS 2026-03-28 00:50:35.967778 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:50:35.967784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 
00:50:35.967790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.967796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.967802 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:50:35.967813 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.967819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.967825 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:50:35.967831 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:50:35.967838 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:50:35.967852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:50:35.967858 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.967868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.967875 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:50:35.967881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:50:35.967887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.967900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.967906 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:50:35.967912 | orchestrator | 2026-03-28 00:50:35.967918 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-28 00:50:35.967924 | orchestrator | Saturday 28 March 2026 00:48:00 +0000 (0:00:03.816) 0:00:21.205 ******** 2026-03-28 00:50:35.967930 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:50:35.967936 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:50:35.967942 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:50:35.967948 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:50:35.967954 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:50:35.967960 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:50:35.967966 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:50:35.967973 | orchestrator | 2026-03-28 00:50:35.967979 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-28 00:50:35.967985 | orchestrator | Saturday 28 March 2026 00:48:02 +0000 (0:00:02.184) 0:00:23.390 ******** 2026-03-28 00:50:35.967991 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:50:35.967998 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:50:35.968004 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:50:35.968011 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:50:35.968017 | 
orchestrator | skipping: [testbed-node-3] 2026-03-28 00:50:35.968024 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:50:35.968030 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:50:35.968037 | orchestrator | 2026-03-28 00:50:35.968043 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-28 00:50:35.968052 | orchestrator | Saturday 28 March 2026 00:48:04 +0000 (0:00:01.478) 0:00:24.869 ******** 2026-03-28 00:50:35.968114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:50:35.968124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:50:35.968134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:50:35.968156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.968166 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:50:35.968179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.968189 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:50:35.968202 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:50:35.968211 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:50:35.968219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.968232 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.968240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.968252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.968260 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.968268 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.968281 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.968289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.968297 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.968309 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.968317 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.968329 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.968338 | orchestrator |
2026-03-28 00:50:35.968347 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-03-28 00:50:35.968355 | orchestrator | Saturday 28 March 2026 00:48:13 +0000 (0:00:09.586) 0:00:34.455 ********
2026-03-28 00:50:35.968364 | orchestrator | [WARNING]: Skipped
2026-03-28 00:50:35.968373 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-03-28 00:50:35.968381 | orchestrator | to this access issue:
2026-03-28 00:50:35.968389 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-03-28 00:50:35.968397 | orchestrator | directory
2026-03-28 00:50:35.968406 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 00:50:35.968414 | orchestrator |
2026-03-28 00:50:35.968422 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-03-28 00:50:35.968430 | orchestrator | Saturday 28 March 2026 00:48:16 +0000 (0:00:02.500) 0:00:36.956 ********
2026-03-28 00:50:35.968437 | orchestrator | [WARNING]: Skipped
2026-03-28 00:50:35.968444 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-03-28 00:50:35.968452 | orchestrator | to this access issue:
2026-03-28 00:50:35.968459 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-03-28 00:50:35.968467 | orchestrator | directory
2026-03-28 00:50:35.968474 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 00:50:35.968482 | orchestrator |
2026-03-28 00:50:35.968489 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-03-28 00:50:35.968496 | orchestrator | Saturday 28 March 2026 00:48:17 +0000 (0:00:01.271) 0:00:38.227 ********
2026-03-28 00:50:35.968510 | orchestrator | [WARNING]: Skipped
2026-03-28 00:50:35.968519 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-03-28 00:50:35.968527 | orchestrator | to this access issue:
2026-03-28 00:50:35.968535 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-03-28 00:50:35.968541 | orchestrator | directory
2026-03-28 00:50:35.968554 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 00:50:35.968560 | orchestrator |
2026-03-28 00:50:35.968566 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-03-28 00:50:35.968574 | orchestrator | Saturday 28 March 2026 00:48:19 +0000 (0:00:01.427) 0:00:39.655 ********
2026-03-28 00:50:35.968583 | orchestrator | [WARNING]: Skipped
2026-03-28 00:50:35.968591 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-03-28 00:50:35.968599 | orchestrator | to this access issue:
2026-03-28 00:50:35.968608 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-03-28 00:50:35.968616 | orchestrator | directory
2026-03-28 00:50:35.968624 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 00:50:35.968633 | orchestrator |
2026-03-28 00:50:35.968639 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-03-28 00:50:35.968644 | orchestrator | Saturday 28 March 2026 00:48:20 +0000 (0:00:01.484) 0:00:41.139 ********
2026-03-28 00:50:35.968651 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:50:35.968656 | orchestrator |
changed: [testbed-manager]
2026-03-28 00:50:35.968662 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:50:35.968668 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:50:35.968673 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:50:35.968679 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:50:35.968685 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:50:35.968690 | orchestrator |
2026-03-28 00:50:35.968696 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-03-28 00:50:35.968702 | orchestrator | Saturday 28 March 2026 00:48:28 +0000 (0:00:07.369) 0:00:48.509 ********
2026-03-28 00:50:35.968709 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-28 00:50:35.968715 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-28 00:50:35.968721 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-28 00:50:35.968727 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-28 00:50:35.968734 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-28 00:50:35.968741 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-28 00:50:35.968749 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-28 00:50:35.968756 | orchestrator |
2026-03-28 00:50:35.968763 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-03-28 00:50:35.968770 | orchestrator | Saturday 28 March 2026 00:48:33 +0000 (0:00:05.856) 0:00:54.365 ********
2026-03-28 00:50:35.968776 | orchestrator | changed: 
[testbed-node-0] 2026-03-28 00:50:35.968783 | orchestrator | changed: [testbed-manager] 2026-03-28 00:50:35.968791 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:50:35.968798 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:50:35.968805 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:50:35.968811 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:50:35.968818 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:50:35.968825 | orchestrator | 2026-03-28 00:50:35.968832 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-28 00:50:35.968839 | orchestrator | Saturday 28 March 2026 00:48:38 +0000 (0:00:04.762) 0:00:59.127 ******** 2026-03-28 00:50:35.968851 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:50:35.968865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.968879 | orchestrator | ok: 
[testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:50:35.968886 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.968893 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:50:35.968901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.968909 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.968921 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:50:35.968934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.968945 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.968953 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.968960 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.968968 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:50:35.968975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.968982 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.968993 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:50:35.969006 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.969018 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:50:35.969026 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:50:35.969034 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.969041 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:50:35.969049 | orchestrator |
2026-03-28 00:50:35.969056 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-03-28 00:50:35.969106 | orchestrator | Saturday 28 March 2026 00:48:42 +0000 (0:00:03.478) 0:01:02.606 ********
2026-03-28 00:50:35.969113 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-28 00:50:35.969121 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-28 00:50:35.969128 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-28 00:50:35.969136 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-28 00:50:35.969143 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-28 00:50:35.969150 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-28 00:50:35.969161 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-28 00:50:35.969169 | orchestrator |
2026-03-28 00:50:35.969176 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-03-28 00:50:35.969183 | orchestrator | Saturday 28 March 2026 00:48:45 +0000 (0:00:03.275) 0:01:05.881 ********
2026-03-28 00:50:35.969190 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-28 00:50:35.969198 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-28 00:50:35.969205 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-28 00:50:35.969212 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-28 00:50:35.969219 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-28 00:50:35.969227 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-28 00:50:35.969234 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-28 00:50:35.969241 | orchestrator |
2026-03-28 00:50:35.969248 | orchestrator | TASK [common : Check common containers] ****************************************
2026-03-28 00:50:35.969256 | orchestrator | Saturday 28 March 2026 00:48:48 +0000 (0:00:02.991) 0:01:08.873 ********
2026-03-28 00:50:35.969263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 00:50:35.969276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:50:35.969284 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:50:35.969291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:50:35.969299 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:50:35.969311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.969329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.969337 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.969350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:50:35.969357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.969364 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.969377 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:50:35.969385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.969396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.969403 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.969411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.969422 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.969430 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.969438 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.969450 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.969457 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:50:35.969465 | orchestrator | 2026-03-28 00:50:35.969472 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-28 00:50:35.969479 | orchestrator | Saturday 28 March 2026 00:48:53 +0000 (0:00:05.160) 0:01:14.033 ******** 2026-03-28 00:50:35.969487 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:50:35.969493 | orchestrator | changed: [testbed-manager] 2026-03-28 00:50:35.969499 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:50:35.969507 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:50:35.969514 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:50:35.969521 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:50:35.969528 | orchestrator | changed: 
[testbed-node-5] 2026-03-28 00:50:35.969535 | orchestrator | 2026-03-28 00:50:35.969541 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-28 00:50:35.969547 | orchestrator | Saturday 28 March 2026 00:48:56 +0000 (0:00:03.280) 0:01:17.314 ******** 2026-03-28 00:50:35.969553 | orchestrator | changed: [testbed-manager] 2026-03-28 00:50:35.969562 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:50:35.969570 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:50:35.969577 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:50:35.969584 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:50:35.969591 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:50:35.969598 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:50:35.969605 | orchestrator | 2026-03-28 00:50:35.969612 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 00:50:35.969619 | orchestrator | Saturday 28 March 2026 00:48:59 +0000 (0:00:02.738) 0:01:20.053 ******** 2026-03-28 00:50:35.969625 | orchestrator | 2026-03-28 00:50:35.969633 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 00:50:35.969640 | orchestrator | Saturday 28 March 2026 00:48:59 +0000 (0:00:00.095) 0:01:20.148 ******** 2026-03-28 00:50:35.969647 | orchestrator | 2026-03-28 00:50:35.969654 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 00:50:35.969660 | orchestrator | Saturday 28 March 2026 00:48:59 +0000 (0:00:00.086) 0:01:20.234 ******** 2026-03-28 00:50:35.969668 | orchestrator | 2026-03-28 00:50:35.969675 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 00:50:35.969682 | orchestrator | Saturday 28 March 2026 00:48:59 +0000 (0:00:00.099) 0:01:20.334 ******** 2026-03-28 00:50:35.969689 | orchestrator | 2026-03-28 
00:50:35.969696 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 00:50:35.969703 | orchestrator | Saturday 28 March 2026 00:49:00 +0000 (0:00:00.487) 0:01:20.821 ******** 2026-03-28 00:50:35.969710 | orchestrator | 2026-03-28 00:50:35.969717 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 00:50:35.969728 | orchestrator | Saturday 28 March 2026 00:49:00 +0000 (0:00:00.202) 0:01:21.024 ******** 2026-03-28 00:50:35.969736 | orchestrator | 2026-03-28 00:50:35.969743 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 00:50:35.969755 | orchestrator | Saturday 28 March 2026 00:49:00 +0000 (0:00:00.183) 0:01:21.207 ******** 2026-03-28 00:50:35.969762 | orchestrator | 2026-03-28 00:50:35.969769 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-28 00:50:35.969777 | orchestrator | Saturday 28 March 2026 00:49:00 +0000 (0:00:00.152) 0:01:21.359 ******** 2026-03-28 00:50:35.969784 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:50:35.969791 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:50:35.969798 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:50:35.969806 | orchestrator | changed: [testbed-manager] 2026-03-28 00:50:35.969813 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:50:35.969820 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:50:35.969828 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:50:35.969835 | orchestrator | 2026-03-28 00:50:35.969842 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-28 00:50:35.969849 | orchestrator | Saturday 28 March 2026 00:49:40 +0000 (0:00:39.103) 0:02:00.463 ******** 2026-03-28 00:50:35.969857 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:50:35.969864 | orchestrator | changed: 
[testbed-node-2] 2026-03-28 00:50:35.969871 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:50:35.969879 | orchestrator | changed: [testbed-manager] 2026-03-28 00:50:35.969886 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:50:35.969893 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:50:35.969900 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:50:35.969908 | orchestrator | 2026-03-28 00:50:35.969915 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-28 00:50:35.969922 | orchestrator | Saturday 28 March 2026 00:50:21 +0000 (0:00:41.251) 0:02:41.715 ******** 2026-03-28 00:50:35.969929 | orchestrator | ok: [testbed-manager] 2026-03-28 00:50:35.969937 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:50:35.969944 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:50:35.969951 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:50:35.969958 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:50:35.969965 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:50:35.969973 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:50:35.969980 | orchestrator | 2026-03-28 00:50:35.969987 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-28 00:50:35.969995 | orchestrator | Saturday 28 March 2026 00:50:23 +0000 (0:00:02.208) 0:02:43.924 ******** 2026-03-28 00:50:35.970002 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:50:35.970010 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:50:35.970081 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:50:35.970089 | orchestrator | changed: [testbed-manager] 2026-03-28 00:50:35.970097 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:50:35.970104 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:50:35.970112 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:50:35.970119 | orchestrator | 2026-03-28 00:50:35.970126 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-28 00:50:35.970135 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 00:50:35.970143 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 00:50:35.970151 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 00:50:35.970160 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 00:50:35.970167 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 00:50:35.970175 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 00:50:35.970193 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 00:50:35.970200 | orchestrator | 2026-03-28 00:50:35.970205 | orchestrator | 2026-03-28 00:50:35.970211 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:50:35.970218 | orchestrator | Saturday 28 March 2026 00:50:33 +0000 (0:00:09.539) 0:02:53.463 ******** 2026-03-28 00:50:35.970225 | orchestrator | =============================================================================== 2026-03-28 00:50:35.970232 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 41.25s 2026-03-28 00:50:35.970239 | orchestrator | common : Restart fluentd container ------------------------------------- 39.10s 2026-03-28 00:50:35.970246 | orchestrator | common : Copying over config.json files for services -------------------- 9.59s 2026-03-28 00:50:35.970253 | orchestrator | common : Restart cron container ----------------------------------------- 9.54s 2026-03-28 00:50:35.970260 | 
orchestrator | common : Copying over fluentd.conf -------------------------------------- 7.37s 2026-03-28 00:50:35.970267 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 7.17s 2026-03-28 00:50:35.970274 | orchestrator | common : Copying over cron logrotate config file ------------------------ 5.86s 2026-03-28 00:50:35.970281 | orchestrator | common : Check common containers ---------------------------------------- 5.16s 2026-03-28 00:50:35.970289 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.86s 2026-03-28 00:50:35.970297 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.76s 2026-03-28 00:50:35.970310 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.82s 2026-03-28 00:50:35.970318 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.48s 2026-03-28 00:50:35.970325 | orchestrator | common : Creating log volume -------------------------------------------- 3.28s 2026-03-28 00:50:35.970332 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.27s 2026-03-28 00:50:35.970339 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.99s 2026-03-28 00:50:35.970346 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 2.74s 2026-03-28 00:50:35.970354 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.50s 2026-03-28 00:50:35.970361 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.21s 2026-03-28 00:50:35.970368 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 2.18s 2026-03-28 00:50:35.970375 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.91s 2026-03-28 00:50:35.975398 | 
orchestrator | 2026-03-28 00:50:35 | INFO  | Task a766a616-d674-4274-87dc-ce366c16edbc is in state STARTED 2026-03-28 00:50:35.977359 | orchestrator | 2026-03-28 00:50:35 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:50:35.978775 | orchestrator | 2026-03-28 00:50:35 | INFO  | Task 77f9fd62-e4cc-4a49-94f4-5e3454354337 is in state STARTED 2026-03-28 00:50:35.981892 | orchestrator | 2026-03-28 00:50:35 | INFO  | Task 14f139a7-2468-4ecd-9276-651fd1f7845a is in state STARTED 2026-03-28 00:50:35.982231 | orchestrator | 2026-03-28 00:50:35 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:39.032813 | orchestrator | 2026-03-28 00:50:39 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:50:39.032911 | orchestrator | 2026-03-28 00:50:39 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:50:39.032918 | orchestrator | 2026-03-28 00:50:39 | INFO  | Task a766a616-d674-4274-87dc-ce366c16edbc is in state STARTED 2026-03-28 00:50:39.032951 | orchestrator | 2026-03-28 00:50:39 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:50:39.035453 | orchestrator | 2026-03-28 00:50:39 | INFO  | Task 77f9fd62-e4cc-4a49-94f4-5e3454354337 is in state STARTED 2026-03-28 00:50:39.040566 | orchestrator | 2026-03-28 00:50:39 | INFO  | Task 14f139a7-2468-4ecd-9276-651fd1f7845a is in state STARTED 2026-03-28 00:50:39.040645 | orchestrator | 2026-03-28 00:50:39 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:42.085969 | orchestrator | 2026-03-28 00:50:42 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:50:42.095577 | orchestrator | 2026-03-28 00:50:42 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:50:42.096180 | orchestrator | 2026-03-28 00:50:42 | INFO  | Task a766a616-d674-4274-87dc-ce366c16edbc is in state STARTED 2026-03-28 00:50:42.097118 | 
orchestrator | 2026-03-28 00:50:42 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:50:42.098822 | orchestrator | 2026-03-28 00:50:42 | INFO  | Task 77f9fd62-e4cc-4a49-94f4-5e3454354337 is in state STARTED 2026-03-28 00:50:42.099539 | orchestrator | 2026-03-28 00:50:42 | INFO  | Task 14f139a7-2468-4ecd-9276-651fd1f7845a is in state STARTED 2026-03-28 00:50:42.099812 | orchestrator | 2026-03-28 00:50:42 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:45.146369 | orchestrator | 2026-03-28 00:50:45 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:50:45.146445 | orchestrator | 2026-03-28 00:50:45 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:50:45.146463 | orchestrator | 2026-03-28 00:50:45 | INFO  | Task a766a616-d674-4274-87dc-ce366c16edbc is in state STARTED 2026-03-28 00:50:45.146475 | orchestrator | 2026-03-28 00:50:45 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:50:45.146486 | orchestrator | 2026-03-28 00:50:45 | INFO  | Task 77f9fd62-e4cc-4a49-94f4-5e3454354337 is in state STARTED 2026-03-28 00:50:45.146497 | orchestrator | 2026-03-28 00:50:45 | INFO  | Task 14f139a7-2468-4ecd-9276-651fd1f7845a is in state STARTED 2026-03-28 00:50:45.146509 | orchestrator | 2026-03-28 00:50:45 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:48.208361 | orchestrator | 2026-03-28 00:50:48 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:50:48.210286 | orchestrator | 2026-03-28 00:50:48 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:50:48.215928 | orchestrator | 2026-03-28 00:50:48 | INFO  | Task a766a616-d674-4274-87dc-ce366c16edbc is in state STARTED 2026-03-28 00:50:48.216642 | orchestrator | 2026-03-28 00:50:48 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:50:48.218283 | 
orchestrator | 2026-03-28 00:50:48 | INFO  | Task 77f9fd62-e4cc-4a49-94f4-5e3454354337 is in state STARTED 2026-03-28 00:50:48.220167 | orchestrator | 2026-03-28 00:50:48 | INFO  | Task 14f139a7-2468-4ecd-9276-651fd1f7845a is in state STARTED 2026-03-28 00:50:48.220200 | orchestrator | 2026-03-28 00:50:48 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:51.297101 | orchestrator | 2026-03-28 00:50:51 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:50:51.297688 | orchestrator | 2026-03-28 00:50:51 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:50:51.299340 | orchestrator | 2026-03-28 00:50:51 | INFO  | Task a766a616-d674-4274-87dc-ce366c16edbc is in state STARTED 2026-03-28 00:50:51.301496 | orchestrator | 2026-03-28 00:50:51 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:50:51.303086 | orchestrator | 2026-03-28 00:50:51 | INFO  | Task 77f9fd62-e4cc-4a49-94f4-5e3454354337 is in state STARTED 2026-03-28 00:50:51.307424 | orchestrator | 2026-03-28 00:50:51 | INFO  | Task 14f139a7-2468-4ecd-9276-651fd1f7845a is in state STARTED 2026-03-28 00:50:51.307476 | orchestrator | 2026-03-28 00:50:51 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:54.360004 | orchestrator | 2026-03-28 00:50:54 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:50:54.364465 | orchestrator | 2026-03-28 00:50:54 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:50:54.367458 | orchestrator | 2026-03-28 00:50:54 | INFO  | Task a766a616-d674-4274-87dc-ce366c16edbc is in state STARTED 2026-03-28 00:50:54.372387 | orchestrator | 2026-03-28 00:50:54 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:50:54.376668 | orchestrator | 2026-03-28 00:50:54 | INFO  | Task 77f9fd62-e4cc-4a49-94f4-5e3454354337 is in state STARTED 2026-03-28 00:50:54.379822 | 
orchestrator | 2026-03-28 00:50:54 | INFO  | Task 14f139a7-2468-4ecd-9276-651fd1f7845a is in state STARTED 2026-03-28 00:50:54.380026 | orchestrator | 2026-03-28 00:50:54 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:57.457719 | orchestrator | 2026-03-28 00:50:57 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:50:57.460513 | orchestrator | 2026-03-28 00:50:57 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:50:57.461467 | orchestrator | 2026-03-28 00:50:57 | INFO  | Task a766a616-d674-4274-87dc-ce366c16edbc is in state STARTED 2026-03-28 00:50:57.462927 | orchestrator | 2026-03-28 00:50:57 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:50:57.464241 | orchestrator | 2026-03-28 00:50:57 | INFO  | Task 77f9fd62-e4cc-4a49-94f4-5e3454354337 is in state STARTED 2026-03-28 00:50:57.465989 | orchestrator | 2026-03-28 00:50:57 | INFO  | Task 14f139a7-2468-4ecd-9276-651fd1f7845a is in state STARTED 2026-03-28 00:50:57.468102 | orchestrator | 2026-03-28 00:50:57 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:00.532602 | orchestrator | 2026-03-28 00:51:00 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:51:00.532696 | orchestrator | 2026-03-28 00:51:00 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:51:00.532708 | orchestrator | 2026-03-28 00:51:00 | INFO  | Task a766a616-d674-4274-87dc-ce366c16edbc is in state STARTED 2026-03-28 00:51:00.532715 | orchestrator | 2026-03-28 00:51:00 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:51:00.532723 | orchestrator | 2026-03-28 00:51:00 | INFO  | Task 77f9fd62-e4cc-4a49-94f4-5e3454354337 is in state STARTED 2026-03-28 00:51:00.532730 | orchestrator | 2026-03-28 00:51:00 | INFO  | Task 14f139a7-2468-4ecd-9276-651fd1f7845a is in state STARTED 2026-03-28 00:51:00.532743 | 
orchestrator | 2026-03-28 00:51:00 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:03.611611 | orchestrator | 2026-03-28 00:51:03 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:51:03.613076 | orchestrator | 2026-03-28 00:51:03 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:51:03.614659 | orchestrator | 2026-03-28 00:51:03 | INFO  | Task a766a616-d674-4274-87dc-ce366c16edbc is in state STARTED 2026-03-28 00:51:03.616104 | orchestrator | 2026-03-28 00:51:03 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:51:03.618113 | orchestrator | 2026-03-28 00:51:03 | INFO  | Task 77f9fd62-e4cc-4a49-94f4-5e3454354337 is in state SUCCESS 2026-03-28 00:51:03.619001 | orchestrator | 2026-03-28 00:51:03 | INFO  | Task 14f139a7-2468-4ecd-9276-651fd1f7845a is in state STARTED 2026-03-28 00:51:03.619065 | orchestrator | 2026-03-28 00:51:03 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:06.678288 | orchestrator | 2026-03-28 00:51:06 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:51:06.685294 | orchestrator | 2026-03-28 00:51:06 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:51:06.685342 | orchestrator | 2026-03-28 00:51:06 | INFO  | Task a766a616-d674-4274-87dc-ce366c16edbc is in state STARTED 2026-03-28 00:51:06.685348 | orchestrator | 2026-03-28 00:51:06 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED 2026-03-28 00:51:06.685353 | orchestrator | 2026-03-28 00:51:06 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:51:06.685358 | orchestrator | 2026-03-28 00:51:06 | INFO  | Task 14f139a7-2468-4ecd-9276-651fd1f7845a is in state STARTED 2026-03-28 00:51:06.685362 | orchestrator | 2026-03-28 00:51:06 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:09.730583 | orchestrator | 2026-03-28 
00:51:09 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:51:09.731526 | orchestrator | 2026-03-28 00:51:09 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:51:09.732483 | orchestrator | 2026-03-28 00:51:09 | INFO  | Task a766a616-d674-4274-87dc-ce366c16edbc is in state STARTED 2026-03-28 00:51:09.734810 | orchestrator | 2026-03-28 00:51:09 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED 2026-03-28 00:51:09.735939 | orchestrator | 2026-03-28 00:51:09 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:51:09.736717 | orchestrator | 2026-03-28 00:51:09 | INFO  | Task 14f139a7-2468-4ecd-9276-651fd1f7845a is in state STARTED 2026-03-28 00:51:09.736754 | orchestrator | 2026-03-28 00:51:09 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:12.857410 | orchestrator | 2026-03-28 00:51:12 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:51:12.857502 | orchestrator | 2026-03-28 00:51:12 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:51:12.857679 | orchestrator | 2026-03-28 00:51:12 | INFO  | Task a766a616-d674-4274-87dc-ce366c16edbc is in state STARTED 2026-03-28 00:51:12.859201 | orchestrator | 2026-03-28 00:51:12 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED 2026-03-28 00:51:12.864266 | orchestrator | 2026-03-28 00:51:12 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:51:12.867364 | orchestrator | 2026-03-28 00:51:12 | INFO  | Task 14f139a7-2468-4ecd-9276-651fd1f7845a is in state STARTED 2026-03-28 00:51:12.867455 | orchestrator | 2026-03-28 00:51:12 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:15.977418 | orchestrator | 2026-03-28 00:51:15 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:51:15.977506 | orchestrator | 2026-03-28 
00:51:15 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:51:15.977527 | orchestrator | 2026-03-28 00:51:15 | INFO  | Task a766a616-d674-4274-87dc-ce366c16edbc is in state STARTED 2026-03-28 00:51:15.977544 | orchestrator | 2026-03-28 00:51:15 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED 2026-03-28 00:51:15.977588 | orchestrator | 2026-03-28 00:51:15 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:51:15.977605 | orchestrator | 2026-03-28 00:51:15 | INFO  | Task 14f139a7-2468-4ecd-9276-651fd1f7845a is in state STARTED 2026-03-28 00:51:15.977620 | orchestrator | 2026-03-28 00:51:15 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:18.996048 | orchestrator | 2026-03-28 00:51:18 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:51:18.997180 | orchestrator | 2026-03-28 00:51:18 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:51:18.999255 | orchestrator | 2026-03-28 00:51:18 | INFO  | Task a766a616-d674-4274-87dc-ce366c16edbc is in state STARTED 2026-03-28 00:51:19.001406 | orchestrator | 2026-03-28 00:51:19 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED 2026-03-28 00:51:19.014600 | orchestrator | 2026-03-28 00:51:19 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:51:19.016483 | orchestrator | 2026-03-28 00:51:19 | INFO  | Task 14f139a7-2468-4ecd-9276-651fd1f7845a is in state SUCCESS 2026-03-28 00:51:19.018427 | orchestrator | 2026-03-28 00:51:19.018497 | orchestrator | 2026-03-28 00:51:19.018517 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 00:51:19.018536 | orchestrator | 2026-03-28 00:51:19.018552 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 00:51:19.018569 | orchestrator | Saturday 28 March 
2026 00:50:43 +0000 (0:00:00.389) 0:00:00.389 ******** 2026-03-28 00:51:19.018587 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:51:19.018607 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:51:19.018625 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:51:19.018643 | orchestrator | 2026-03-28 00:51:19.018662 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 00:51:19.018681 | orchestrator | Saturday 28 March 2026 00:50:44 +0000 (0:00:00.829) 0:00:01.219 ******** 2026-03-28 00:51:19.018698 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-28 00:51:19.018710 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-28 00:51:19.018721 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-28 00:51:19.018732 | orchestrator | 2026-03-28 00:51:19.018743 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-28 00:51:19.018754 | orchestrator | 2026-03-28 00:51:19.018765 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-28 00:51:19.018776 | orchestrator | Saturday 28 March 2026 00:50:45 +0000 (0:00:00.866) 0:00:02.085 ******** 2026-03-28 00:51:19.018788 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:51:19.018800 | orchestrator | 2026-03-28 00:51:19.018811 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-28 00:51:19.018822 | orchestrator | Saturday 28 March 2026 00:50:46 +0000 (0:00:01.109) 0:00:03.194 ******** 2026-03-28 00:51:19.018833 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-28 00:51:19.018844 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-28 00:51:19.018855 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-28 
00:51:19.018866 | orchestrator | 2026-03-28 00:51:19.018876 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-28 00:51:19.018887 | orchestrator | Saturday 28 March 2026 00:50:47 +0000 (0:00:01.325) 0:00:04.520 ******** 2026-03-28 00:51:19.018898 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-28 00:51:19.018909 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-28 00:51:19.018920 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-28 00:51:19.018954 | orchestrator | 2026-03-28 00:51:19.018965 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-03-28 00:51:19.018976 | orchestrator | Saturday 28 March 2026 00:50:50 +0000 (0:00:03.458) 0:00:07.979 ******** 2026-03-28 00:51:19.018987 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:51:19.018998 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:51:19.019045 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:51:19.019059 | orchestrator | 2026-03-28 00:51:19.019071 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-28 00:51:19.019083 | orchestrator | Saturday 28 March 2026 00:50:54 +0000 (0:00:03.182) 0:00:11.161 ******** 2026-03-28 00:51:19.019096 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:51:19.019108 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:51:19.019121 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:51:19.019134 | orchestrator | 2026-03-28 00:51:19.019147 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:51:19.019160 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:51:19.019185 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:51:19.019198 | 
orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:51:19.019211 | orchestrator | 2026-03-28 00:51:19.019224 | orchestrator | 2026-03-28 00:51:19.019236 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:51:19.019249 | orchestrator | Saturday 28 March 2026 00:51:02 +0000 (0:00:07.912) 0:00:19.074 ******** 2026-03-28 00:51:19.019261 | orchestrator | =============================================================================== 2026-03-28 00:51:19.019273 | orchestrator | memcached : Restart memcached container --------------------------------- 7.91s 2026-03-28 00:51:19.019286 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.46s 2026-03-28 00:51:19.019298 | orchestrator | memcached : Check memcached container ----------------------------------- 3.18s 2026-03-28 00:51:19.019311 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.33s 2026-03-28 00:51:19.019324 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.11s 2026-03-28 00:51:19.019336 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.87s 2026-03-28 00:51:19.019349 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.83s 2026-03-28 00:51:19.019361 | orchestrator | 2026-03-28 00:51:19.019374 | orchestrator | 2026-03-28 00:51:19.019385 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 00:51:19.019396 | orchestrator | 2026-03-28 00:51:19.019406 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 00:51:19.019417 | orchestrator | Saturday 28 March 2026 00:50:43 +0000 (0:00:00.557) 0:00:00.557 ******** 2026-03-28 00:51:19.019428 | orchestrator | ok: [testbed-node-0] 2026-03-28 
00:51:19.019439 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:51:19.019450 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:51:19.019461 | orchestrator | 2026-03-28 00:51:19.019472 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 00:51:19.019497 | orchestrator | Saturday 28 March 2026 00:50:43 +0000 (0:00:00.685) 0:00:01.242 ******** 2026-03-28 00:51:19.019509 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-28 00:51:19.019520 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-28 00:51:19.019531 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-28 00:51:19.019542 | orchestrator | 2026-03-28 00:51:19.019553 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-28 00:51:19.019564 | orchestrator | 2026-03-28 00:51:19.019576 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-28 00:51:19.019596 | orchestrator | Saturday 28 March 2026 00:50:44 +0000 (0:00:01.002) 0:00:02.245 ******** 2026-03-28 00:51:19.019607 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:51:19.019618 | orchestrator | 2026-03-28 00:51:19.019629 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-28 00:51:19.019639 | orchestrator | Saturday 28 March 2026 00:50:46 +0000 (0:00:01.309) 0:00:03.555 ******** 2026-03-28 00:51:19.019654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.019671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.019688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.019701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.019712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.019733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.019755 | orchestrator | 2026-03-28 00:51:19.019766 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-28 00:51:19.019777 | orchestrator | Saturday 28 March 2026 00:50:48 +0000 (0:00:02.533) 0:00:06.088 ******** 2026-03-28 00:51:19.019789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 
'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.019801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.019812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.019824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.019836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.019871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.019889 | orchestrator | 2026-03-28 00:51:19.019901 
| orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-28 00:51:19.019912 | orchestrator | Saturday 28 March 2026 00:50:53 +0000 (0:00:04.597) 0:00:10.686 ******** 2026-03-28 00:51:19.019924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.019935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.019947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 
'timeout': '30'}}}) 2026-03-28 00:51:19.019963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.019974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.020046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.020069 | orchestrator | 2026-03-28 00:51:19.020087 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-28 00:51:19.020103 | orchestrator | Saturday 28 March 2026 00:50:58 +0000 (0:00:04.915) 0:00:15.602 ******** 2026-03-28 00:51:19.020122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.020142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.020164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 
'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.020192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.020211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.020245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 
'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:51:19.020257 | orchestrator | 2026-03-28 00:51:19.020269 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-28 00:51:19.020280 | orchestrator | Saturday 28 March 2026 00:51:00 +0000 (0:00:02.041) 0:00:17.644 ******** 2026-03-28 00:51:19.020291 | orchestrator | 2026-03-28 00:51:19.020302 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-28 00:51:19.020313 | orchestrator | Saturday 28 March 2026 00:51:00 +0000 (0:00:00.276) 0:00:17.920 ******** 2026-03-28 00:51:19.020332 | orchestrator | 2026-03-28 00:51:19.020350 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-28 00:51:19.020369 | orchestrator | Saturday 28 March 2026 00:51:00 +0000 (0:00:00.111) 0:00:18.031 ******** 2026-03-28 00:51:19.020386 | orchestrator | 2026-03-28 00:51:19.020403 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-28 00:51:19.020422 | orchestrator | Saturday 28 March 2026 00:51:00 +0000 (0:00:00.098) 0:00:18.130 ******** 2026-03-28 00:51:19.020447 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:51:19.020467 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:51:19.020486 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:51:19.020504 | orchestrator | 2026-03-28 
00:51:19.020523 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-28 00:51:19.020541 | orchestrator | Saturday 28 March 2026 00:51:06 +0000 (0:00:06.034) 0:00:24.165 ******** 2026-03-28 00:51:19.020560 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:51:19.020572 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:51:19.020582 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:51:19.020593 | orchestrator | 2026-03-28 00:51:19.020603 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:51:19.020615 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:51:19.020626 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:51:19.020637 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:51:19.020648 | orchestrator | 2026-03-28 00:51:19.020659 | orchestrator | 2026-03-28 00:51:19.020670 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:51:19.020681 | orchestrator | Saturday 28 March 2026 00:51:15 +0000 (0:00:08.917) 0:00:33.082 ******** 2026-03-28 00:51:19.020691 | orchestrator | =============================================================================== 2026-03-28 00:51:19.020702 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.92s 2026-03-28 00:51:19.020713 | orchestrator | redis : Restart redis container ----------------------------------------- 6.03s 2026-03-28 00:51:19.020724 | orchestrator | redis : Copying over redis config files --------------------------------- 4.92s 2026-03-28 00:51:19.020734 | orchestrator | redis : Copying over default config.json files -------------------------- 4.60s 2026-03-28 00:51:19.020745 | orchestrator | 
redis : Ensuring config directories exist ------------------------------- 2.53s 2026-03-28 00:51:19.020768 | orchestrator | redis : Check redis containers ------------------------------------------ 2.04s 2026-03-28 00:51:19.020786 | orchestrator | redis : include_tasks --------------------------------------------------- 1.31s 2026-03-28 00:51:19.020812 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.00s 2026-03-28 00:51:19.020829 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.69s 2026-03-28 00:51:19.020847 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.49s 2026-03-28 00:51:19.020932 | orchestrator | 2026-03-28 00:51:19 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:22.056648 | orchestrator | 2026-03-28 00:51:22 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:51:22.058290 | orchestrator | 2026-03-28 00:51:22 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:51:22.059185 | orchestrator | 2026-03-28 00:51:22 | INFO  | Task a766a616-d674-4274-87dc-ce366c16edbc is in state STARTED 2026-03-28 00:51:22.061928 | orchestrator | 2026-03-28 00:51:22 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED 2026-03-28 00:51:22.063272 | orchestrator | 2026-03-28 00:51:22 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:51:22.063301 | orchestrator | 2026-03-28 00:51:22 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:25.111165 | orchestrator | 2026-03-28 00:51:25 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:51:25.113742 | orchestrator | 2026-03-28 00:51:25 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:51:25.114829 | orchestrator | 2026-03-28 00:51:25 | INFO  | Task a766a616-d674-4274-87dc-ce366c16edbc is in 
state STARTED 2026-03-28 00:51:25.115776 | orchestrator | 2026-03-28 00:51:25 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED 2026-03-28 00:51:25.117774 | orchestrator | 2026-03-28 00:51:25 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:51:25.118105 | orchestrator | 2026-03-28 00:51:25 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:52:04.843145 | orchestrator | 2026-03-28 00:52:04 | INFO  |
Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED
2026-03-28 00:52:04.844799 | orchestrator | 2026-03-28 00:52:04 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED
2026-03-28 00:52:04.846682 | orchestrator | 2026-03-28 00:52:04 | INFO  | Task a766a616-d674-4274-87dc-ce366c16edbc is in state SUCCESS
2026-03-28 00:52:04.848487 | orchestrator |
2026-03-28 00:52:04.848545 | orchestrator |
2026-03-28 00:52:04.848556 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 00:52:04.848567 | orchestrator |
2026-03-28 00:52:04.848575 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 00:52:04.848584 | orchestrator | Saturday 28 March 2026 00:50:43 +0000 (0:00:00.526) 0:00:00.526 ********
2026-03-28 00:52:04.848593 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:52:04.848602 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:52:04.848611 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:52:04.848619 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:52:04.848627 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:52:04.848635 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:52:04.848643 | orchestrator |
2026-03-28 00:52:04.848651 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 00:52:04.848660 | orchestrator | Saturday 28 March 2026 00:50:45 +0000 (0:00:01.694) 0:00:02.220 ********
2026-03-28 00:52:04.848668 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-28 00:52:04.848676 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-28 00:52:04.848684 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-28 00:52:04.848692 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-28 00:52:04.848700 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-28 00:52:04.848708 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-28 00:52:04.848716 | orchestrator |
2026-03-28 00:52:04.848724 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-03-28 00:52:04.848732 | orchestrator |
2026-03-28 00:52:04.848740 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-03-28 00:52:04.848748 | orchestrator | Saturday 28 March 2026 00:50:46 +0000 (0:00:01.413) 0:00:03.634 ********
2026-03-28 00:52:04.848757 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:52:04.848767 | orchestrator |
2026-03-28 00:52:04.848775 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-28 00:52:04.848783 | orchestrator | Saturday 28 March 2026 00:50:49 +0000 (0:00:02.387) 0:00:06.021 ********
2026-03-28 00:52:04.848791 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-03-28 00:52:04.848800 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-03-28 00:52:04.848822 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-03-28 00:52:04.848831 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-03-28 00:52:04.848839 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-03-28 00:52:04.848847 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-03-28 00:52:04.848874 | orchestrator |
2026-03-28 00:52:04.848882 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-28 00:52:04.848891 | orchestrator | Saturday 28 March 2026 00:50:52 +0000 (0:00:02.938) 0:00:08.960 ********
2026-03-28 00:52:04.848899 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-03-28 00:52:04.848907 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-03-28 00:52:04.848914 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-03-28 00:52:04.848922 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-03-28 00:52:04.848930 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-03-28 00:52:04.848938 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-03-28 00:52:04.849023 | orchestrator |
2026-03-28 00:52:04.849035 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-28 00:52:04.849044 | orchestrator | Saturday 28 March 2026 00:50:54 +0000 (0:00:02.776) 0:00:11.736 ********
2026-03-28 00:52:04.849053 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-03-28 00:52:04.849063 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:04.849073 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-03-28 00:52:04.849083 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:52:04.849092 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-03-28 00:52:04.849102 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:52:04.849111 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-03-28 00:52:04.849120 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:52:04.849130 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-03-28 00:52:04.849139 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:52:04.849148 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-03-28 00:52:04.849157 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:52:04.849166 | orchestrator |
2026-03-28 00:52:04.849176 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host]
*****************
2026-03-28 00:52:04.849185 | orchestrator | Saturday 28 March 2026 00:50:57 +0000 (0:00:03.043) 0:00:14.780 ********
2026-03-28 00:52:04.849194 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:04.849203 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:52:04.849212 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:52:04.849221 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:52:04.849231 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:52:04.849240 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:52:04.849249 | orchestrator |
2026-03-28 00:52:04.849258 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-03-28 00:52:04.849267 | orchestrator | Saturday 28 March 2026 00:50:59 +0000 (0:00:01.341) 0:00:16.122 ********
2026-03-28 00:52:04.849295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:52:04.849310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:52:04.849332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:52:04.849343 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:52:04.849353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:52:04.849367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:52:04.849377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:52:04.849387 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:52:04.849405 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:52:04.849416 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:52:04.849425 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:52:04.849439 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:52:04.849448 | orchestrator |
2026-03-28 00:52:04.849456 |
orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-03-28 00:52:04.849465 | orchestrator | Saturday 28 March 2026 00:51:01 +0000 (0:00:02.455) 0:00:18.578 ********
2026-03-28 00:52:04.849473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:52:04.849488 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:52:04.849500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:52:04.849509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:52:04.849517 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:52:04.849531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:52:04.849546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:52:04.849558 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:52:04.849567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:52:04.849575 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:52:04.849587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:52:04.849596 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:52:04.849609 | orchestrator |
2026-03-28 00:52:04.849617 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-03-28 00:52:04.849626 | orchestrator | Saturday 28 March 2026 00:51:06 +0000 (0:00:04.354) 0:00:22.933 ********
2026-03-28 00:52:04.849634 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:04.849642 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:52:04.849650 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:52:04.849658 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:52:04.849666 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:52:04.849674 | orchestrator |
skipping: [testbed-node-5]
2026-03-28 00:52:04.849682 | orchestrator |
2026-03-28 00:52:04.849690 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2026-03-28 00:52:04.849698 | orchestrator | Saturday 28 March 2026 00:51:07 +0000 (0:00:01.813) 0:00:24.746 ********
2026-03-28 00:52:04.849712 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:52:04.849721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:52:04.849729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:52:04.849742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:52:04.849756 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:52:04.849764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:52:04.849777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:52:04.849785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:52:04.849794 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:52:04.849824 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:52:04.849833 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:52:04.849842 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:52:04.849850 | orchestrator |
2026-03-28 00:52:04.849858 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-28 00:52:04.849870 | orchestrator | Saturday 28 March 2026 00:51:12 +0000 (0:00:04.423) 0:00:29.170 ********
2026-03-28 00:52:04.849879 | orchestrator |
2026-03-28 00:52:04.849887 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-28 00:52:04.849895 | orchestrator | Saturday 28 March 2026 00:51:12 +0000 (0:00:00.495) 0:00:29.665 ********
2026-03-28 00:52:04.849903 | orchestrator |
2026-03-28 00:52:04.849911 |
orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 00:52:04.849919 | orchestrator | Saturday 28 March 2026 00:51:13 +0000 (0:00:00.351) 0:00:30.016 ******** 2026-03-28 00:52:04.849927 | orchestrator | 2026-03-28 00:52:04.849935 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 00:52:04.849943 | orchestrator | Saturday 28 March 2026 00:51:13 +0000 (0:00:00.370) 0:00:30.387 ******** 2026-03-28 00:52:04.849975 | orchestrator | 2026-03-28 00:52:04.849988 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 00:52:04.850002 | orchestrator | Saturday 28 March 2026 00:51:13 +0000 (0:00:00.338) 0:00:30.725 ******** 2026-03-28 00:52:04.850073 | orchestrator | 2026-03-28 00:52:04.850084 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 00:52:04.850092 | orchestrator | Saturday 28 March 2026 00:51:14 +0000 (0:00:00.270) 0:00:30.996 ******** 2026-03-28 00:52:04.850100 | orchestrator | 2026-03-28 00:52:04.850108 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-28 00:52:04.850117 | orchestrator | Saturday 28 March 2026 00:51:14 +0000 (0:00:00.357) 0:00:31.353 ******** 2026-03-28 00:52:04.850125 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:04.850143 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:04.850151 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:52:04.850160 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:04.850168 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:52:04.850176 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:52:04.850183 | orchestrator | 2026-03-28 00:52:04.850191 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-28 00:52:04.850200 | orchestrator | 
Saturday 28 March 2026 00:51:25 +0000 (0:00:10.955) 0:00:42.309 ******** 2026-03-28 00:52:04.850208 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:04.850216 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:04.850224 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:04.850232 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:52:04.850240 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:52:04.850247 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:52:04.850255 | orchestrator | 2026-03-28 00:52:04.850263 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-28 00:52:04.850271 | orchestrator | Saturday 28 March 2026 00:51:26 +0000 (0:00:01.370) 0:00:43.680 ******** 2026-03-28 00:52:04.850518 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:04.850530 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:52:04.850538 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:04.850546 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:04.850554 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:52:04.850563 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:52:04.850571 | orchestrator | 2026-03-28 00:52:04.850579 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-28 00:52:04.850587 | orchestrator | Saturday 28 March 2026 00:51:37 +0000 (0:00:10.899) 0:00:54.579 ******** 2026-03-28 00:52:04.850603 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-28 00:52:04.850612 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-28 00:52:04.850620 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-28 00:52:04.850667 | orchestrator | changed: [testbed-node-3] => (item={'col': 
'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-28 00:52:04.850675 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-28 00:52:04.850683 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-28 00:52:04.850691 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-28 00:52:04.850699 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-28 00:52:04.850707 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-28 00:52:04.850715 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-28 00:52:04.850723 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-28 00:52:04.850731 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-28 00:52:04.850739 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 00:52:04.850747 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 00:52:04.850755 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 00:52:04.850771 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 00:52:04.850785 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 
'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 00:52:04.850793 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 00:52:04.850801 | orchestrator | 2026-03-28 00:52:04.850809 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-03-28 00:52:04.850817 | orchestrator | Saturday 28 March 2026 00:51:45 +0000 (0:00:07.992) 0:01:02.571 ******** 2026-03-28 00:52:04.850826 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-28 00:52:04.850834 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:04.850842 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-28 00:52:04.850849 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:04.850857 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-28 00:52:04.850865 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:04.850874 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-03-28 00:52:04.850882 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-03-28 00:52:04.850890 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-03-28 00:52:04.850898 | orchestrator | 2026-03-28 00:52:04.850906 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-28 00:52:04.850914 | orchestrator | Saturday 28 March 2026 00:51:48 +0000 (0:00:02.813) 0:01:05.385 ******** 2026-03-28 00:52:04.850922 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-28 00:52:04.850930 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:04.850938 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-28 00:52:04.850968 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:04.850980 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-28 00:52:04.850989 | 
orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:04.850997 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-28 00:52:04.851005 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-28 00:52:04.851013 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-28 00:52:04.851021 | orchestrator | 2026-03-28 00:52:04.851029 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-28 00:52:04.851037 | orchestrator | Saturday 28 March 2026 00:51:52 +0000 (0:00:04.258) 0:01:09.643 ******** 2026-03-28 00:52:04.851045 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:04.851054 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:04.851064 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:04.851072 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:52:04.851082 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:52:04.851091 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:52:04.851101 | orchestrator | 2026-03-28 00:52:04.851110 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:52:04.851120 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 00:52:04.851136 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 00:52:04.851146 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 00:52:04.851155 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 00:52:04.851165 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 00:52:04.851180 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-03-28 00:52:04.851189 | orchestrator | 2026-03-28 00:52:04.851199 | orchestrator | 2026-03-28 00:52:04.851208 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:52:04.851218 | orchestrator | Saturday 28 March 2026 00:52:02 +0000 (0:00:09.194) 0:01:18.838 ******** 2026-03-28 00:52:04.851227 | orchestrator | =============================================================================== 2026-03-28 00:52:04.851236 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 20.09s 2026-03-28 00:52:04.851245 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.96s 2026-03-28 00:52:04.851254 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.99s 2026-03-28 00:52:04.851264 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 4.42s 2026-03-28 00:52:04.851272 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.35s 2026-03-28 00:52:04.851282 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.26s 2026-03-28 00:52:04.851291 | orchestrator | module-load : Drop module persistence ----------------------------------- 3.04s 2026-03-28 00:52:04.851300 | orchestrator | module-load : Load modules ---------------------------------------------- 2.94s 2026-03-28 00:52:04.851309 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.81s 2026-03-28 00:52:04.851319 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.78s 2026-03-28 00:52:04.851328 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.46s 2026-03-28 00:52:04.851341 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.39s 2026-03-28 00:52:04.851351 | 
orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.18s 2026-03-28 00:52:04.851360 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.81s 2026-03-28 00:52:04.851370 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.69s 2026-03-28 00:52:04.851379 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.41s 2026-03-28 00:52:04.851389 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.37s 2026-03-28 00:52:04.851398 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.34s 2026-03-28 00:52:04.851408 | orchestrator | 2026-03-28 00:52:04 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED 2026-03-28 00:52:04.851494 | orchestrator | 2026-03-28 00:52:04 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:52:04.851505 | orchestrator | 2026-03-28 00:52:04 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED 2026-03-28 00:52:04.851513 | orchestrator | 2026-03-28 00:52:04 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:52:07.893274 | orchestrator | 2026-03-28 00:52:07 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:52:07.893872 | orchestrator | 2026-03-28 00:52:07 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state STARTED 2026-03-28 00:52:07.894652 | orchestrator | 2026-03-28 00:52:07 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED 2026-03-28 00:52:07.895529 | orchestrator | 2026-03-28 00:52:07 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:52:07.896507 | orchestrator | 2026-03-28 00:52:07 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED 2026-03-28 00:52:07.896557 | orchestrator | 2026-03-28 00:52:07 | INFO  | Wait 1 
second(s) until the next check 2026-03-28 00:52:38.722757 | orchestrator | 2026-03-28 00:52:38 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:52:38.726591 | orchestrator | 2026-03-28 00:52:38 | INFO  | Task d2ddb3fa-7495-46ae-a566-c98787265bf1 is in state SUCCESS 2026-03-28 00:52:38.726673 | orchestrator | 2026-03-28 00:52:38 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED 2026-03-28 00:52:38.727628 | orchestrator | 2026-03-28 00:52:38.727688 | orchestrator | 2026-03-28 00:52:38.727764 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-28 00:52:38.727786 | orchestrator | 2026-03-28 00:52:38.727806 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec
'main' - Prerequisites] *** 2026-03-28 00:52:38.727823 | orchestrator | Saturday 28 March 2026 00:47:40 +0000 (0:00:00.209) 0:00:00.209 ******** 2026-03-28 00:52:38.727840 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:52:38.727859 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:52:38.727875 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:52:38.727892 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:38.727939 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:38.727957 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:38.727974 | orchestrator | 2026-03-28 00:52:38.727990 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-28 00:52:38.728008 | orchestrator | Saturday 28 March 2026 00:47:41 +0000 (0:00:01.082) 0:00:01.292 ******** 2026-03-28 00:52:38.728025 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:38.728045 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:38.728062 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:38.728080 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:38.728098 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:38.728116 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:38.728134 | orchestrator | 2026-03-28 00:52:38.728152 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-28 00:52:38.728172 | orchestrator | Saturday 28 March 2026 00:47:42 +0000 (0:00:00.896) 0:00:02.188 ******** 2026-03-28 00:52:38.728190 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:38.728209 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:38.728230 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:38.728248 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:38.728266 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:38.728279 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:38.728291 | 
orchestrator | 2026-03-28 00:52:38.728302 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-28 00:52:38.728313 | orchestrator | Saturday 28 March 2026 00:47:42 +0000 (0:00:00.737) 0:00:02.925 ******** 2026-03-28 00:52:38.728324 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:52:38.728335 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:38.728346 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:52:38.728357 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:38.728367 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:52:38.728378 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:38.728388 | orchestrator | 2026-03-28 00:52:38.728399 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-28 00:52:38.728410 | orchestrator | Saturday 28 March 2026 00:47:44 +0000 (0:00:01.843) 0:00:04.769 ******** 2026-03-28 00:52:38.728421 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:52:38.728432 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:52:38.728442 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:38.728453 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:38.728463 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:38.728474 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:52:38.728484 | orchestrator | 2026-03-28 00:52:38.728495 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-28 00:52:38.728506 | orchestrator | Saturday 28 March 2026 00:47:46 +0000 (0:00:01.786) 0:00:06.555 ******** 2026-03-28 00:52:38.728517 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:52:38.728527 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:52:38.728538 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:52:38.728549 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:38.728559 | orchestrator | 
changed: [testbed-node-1] 2026-03-28 00:52:38.728570 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:38.728581 | orchestrator | 2026-03-28 00:52:38.728612 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-28 00:52:38.728623 | orchestrator | Saturday 28 March 2026 00:47:47 +0000 (0:00:01.064) 0:00:07.620 ******** 2026-03-28 00:52:38.728634 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:38.728645 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:38.728655 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:38.728666 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:38.728677 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:38.728687 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:38.728698 | orchestrator | 2026-03-28 00:52:38.728709 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-28 00:52:38.728721 | orchestrator | Saturday 28 March 2026 00:47:49 +0000 (0:00:01.417) 0:00:09.037 ******** 2026-03-28 00:52:38.728731 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:38.728742 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:38.728753 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:38.728764 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:38.728775 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:38.728785 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:38.728796 | orchestrator | 2026-03-28 00:52:38.728807 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-28 00:52:38.728818 | orchestrator | Saturday 28 March 2026 00:47:49 +0000 (0:00:00.640) 0:00:09.678 ******** 2026-03-28 00:52:38.728829 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 00:52:38.728839 | orchestrator | skipping: [testbed-node-3] => 
(item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 00:52:38.728850 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:38.728861 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 00:52:38.728872 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 00:52:38.728883 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:38.728893 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 00:52:38.728931 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 00:52:38.728943 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:38.728966 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 00:52:38.728996 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 00:52:38.729008 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:38.729019 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 00:52:38.729029 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 00:52:38.729040 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:38.729051 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 00:52:38.729062 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 00:52:38.729072 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:38.729083 | orchestrator | 2026-03-28 00:52:38.729094 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-03-28 00:52:38.729105 | orchestrator | Saturday 28 March 2026 00:47:50 +0000 (0:00:00.956) 0:00:10.634 ******** 2026-03-28 00:52:38.729116 | orchestrator | skipping: 
[testbed-node-3] 2026-03-28 00:52:38.729127 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:38.729138 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:38.729149 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:38.729160 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:38.729171 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:38.729181 | orchestrator | 2026-03-28 00:52:38.729192 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-03-28 00:52:38.729212 | orchestrator | Saturday 28 March 2026 00:47:52 +0000 (0:00:01.416) 0:00:12.051 ******** 2026-03-28 00:52:38.729223 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:52:38.729234 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:52:38.729245 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:52:38.729256 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:38.729266 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:38.729277 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:38.729288 | orchestrator | 2026-03-28 00:52:38.729299 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-03-28 00:52:38.729310 | orchestrator | Saturday 28 March 2026 00:47:52 +0000 (0:00:00.879) 0:00:12.931 ******** 2026-03-28 00:52:38.729320 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:38.729331 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:52:38.729342 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:52:38.729353 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:38.729364 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:52:38.729374 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:38.729385 | orchestrator | 2026-03-28 00:52:38.729396 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-28 00:52:38.729413 | orchestrator | 
Saturday 28 March 2026 00:47:57 +0000 (0:00:04.842) 0:00:17.773 ******** 2026-03-28 00:52:38.729431 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:38.729448 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:38.729464 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:38.729482 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:38.729497 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:38.729515 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:38.729532 | orchestrator | 2026-03-28 00:52:38.729548 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-28 00:52:38.729566 | orchestrator | Saturday 28 March 2026 00:48:00 +0000 (0:00:02.235) 0:00:20.009 ******** 2026-03-28 00:52:38.729585 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:38.729602 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:38.729621 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:38.729638 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:38.729656 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:38.729675 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:38.729692 | orchestrator | 2026-03-28 00:52:38.729711 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-28 00:52:38.729730 | orchestrator | Saturday 28 March 2026 00:48:03 +0000 (0:00:03.765) 0:00:23.774 ******** 2026-03-28 00:52:38.729750 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:38.729769 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:38.729787 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:38.729805 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:38.729816 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:38.729826 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:38.729837 
| orchestrator | 2026-03-28 00:52:38.729847 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-03-28 00:52:38.729858 | orchestrator | Saturday 28 March 2026 00:48:04 +0000 (0:00:00.793) 0:00:24.568 ******** 2026-03-28 00:52:38.729870 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-03-28 00:52:38.729881 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-03-28 00:52:38.729892 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:38.729969 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-03-28 00:52:38.729991 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-03-28 00:52:38.730008 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:38.730088 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-03-28 00:52:38.730100 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-03-28 00:52:38.730111 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:38.730133 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-03-28 00:52:38.730144 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-03-28 00:52:38.730155 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:38.730166 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-03-28 00:52:38.730176 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-03-28 00:52:38.730187 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:38.730198 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-03-28 00:52:38.730208 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-03-28 00:52:38.730219 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:38.730230 | orchestrator | 2026-03-28 00:52:38.730249 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-03-28 00:52:38.730273 | 
orchestrator | Saturday 28 March 2026 00:48:06 +0000 (0:00:02.297) 0:00:26.866 ******** 2026-03-28 00:52:38.730284 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:38.730295 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:38.730306 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:38.730317 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:38.730327 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:38.730338 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:38.730349 | orchestrator | 2026-03-28 00:52:38.730360 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-03-28 00:52:38.730371 | orchestrator | Saturday 28 March 2026 00:48:08 +0000 (0:00:01.625) 0:00:28.491 ******** 2026-03-28 00:52:38.730381 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:38.730392 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:38.730403 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:38.730414 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:38.730424 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:38.730435 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:38.730446 | orchestrator | 2026-03-28 00:52:38.730457 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-03-28 00:52:38.730467 | orchestrator | 2026-03-28 00:52:38.730478 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-03-28 00:52:38.730489 | orchestrator | Saturday 28 March 2026 00:48:10 +0000 (0:00:02.078) 0:00:30.570 ******** 2026-03-28 00:52:38.730500 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:38.730510 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:38.730521 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:38.730531 | orchestrator | 2026-03-28 00:52:38.730542 | orchestrator | 
TASK [k3s_server : Stop k3s-init] ********************************************** 2026-03-28 00:52:38.730553 | orchestrator | Saturday 28 March 2026 00:48:13 +0000 (0:00:02.732) 0:00:33.303 ******** 2026-03-28 00:52:38.730563 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:38.730574 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:38.730585 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:38.730595 | orchestrator | 2026-03-28 00:52:38.730606 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-03-28 00:52:38.730617 | orchestrator | Saturday 28 March 2026 00:48:15 +0000 (0:00:02.194) 0:00:35.497 ******** 2026-03-28 00:52:38.730628 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:38.730638 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:38.730649 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:38.730660 | orchestrator | 2026-03-28 00:52:38.730671 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-03-28 00:52:38.730681 | orchestrator | Saturday 28 March 2026 00:48:17 +0000 (0:00:01.508) 0:00:37.005 ******** 2026-03-28 00:52:38.730692 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:38.730703 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:38.730714 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:38.730731 | orchestrator | 2026-03-28 00:52:38.730750 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-03-28 00:52:38.730779 | orchestrator | Saturday 28 March 2026 00:48:17 +0000 (0:00:00.830) 0:00:37.836 ******** 2026-03-28 00:52:38.730799 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:38.730818 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:38.730839 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:38.730859 | orchestrator | 2026-03-28 00:52:38.730881 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] 
************************** 2026-03-28 00:52:38.730960 | orchestrator | Saturday 28 March 2026 00:48:18 +0000 (0:00:00.407) 0:00:38.244 ******** 2026-03-28 00:52:38.730977 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:38.730988 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:38.730999 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:38.731010 | orchestrator | 2026-03-28 00:52:38.731021 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-03-28 00:52:38.731031 | orchestrator | Saturday 28 March 2026 00:48:20 +0000 (0:00:01.799) 0:00:40.043 ******** 2026-03-28 00:52:38.731041 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:38.731050 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:38.731060 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:38.731069 | orchestrator | 2026-03-28 00:52:38.731079 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-03-28 00:52:38.731088 | orchestrator | Saturday 28 March 2026 00:48:22 +0000 (0:00:01.979) 0:00:42.023 ******** 2026-03-28 00:52:38.731098 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:52:38.731108 | orchestrator | 2026-03-28 00:52:38.731118 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-03-28 00:52:38.731127 | orchestrator | Saturday 28 March 2026 00:48:22 +0000 (0:00:00.717) 0:00:42.740 ******** 2026-03-28 00:52:38.731137 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:38.731146 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:38.731156 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:38.731165 | orchestrator | 2026-03-28 00:52:38.731175 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-03-28 00:52:38.731184 | orchestrator | Saturday 28 March 2026 
00:48:26 +0000 (0:00:03.638) 0:00:46.378 ******** 2026-03-28 00:52:38.731194 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:38.731204 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:38.731213 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:38.731223 | orchestrator | 2026-03-28 00:52:38.731232 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-03-28 00:52:38.731242 | orchestrator | Saturday 28 March 2026 00:48:27 +0000 (0:00:00.906) 0:00:47.284 ******** 2026-03-28 00:52:38.731251 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:38.731261 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:38.731270 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:38.731279 | orchestrator | 2026-03-28 00:52:38.731289 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-03-28 00:52:38.731298 | orchestrator | Saturday 28 March 2026 00:48:28 +0000 (0:00:00.887) 0:00:48.172 ******** 2026-03-28 00:52:38.731308 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:38.731318 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:38.731335 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:38.731345 | orchestrator | 2026-03-28 00:52:38.731354 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-28 00:52:38.731372 | orchestrator | Saturday 28 March 2026 00:48:30 +0000 (0:00:02.239) 0:00:50.412 ******** 2026-03-28 00:52:38.731382 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:38.731391 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:38.731401 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:38.731410 | orchestrator | 2026-03-28 00:52:38.731420 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-03-28 00:52:38.731430 | orchestrator | Saturday 28 March 2026 
00:48:31 +0000 (0:00:00.887) 0:00:51.299 ******** 2026-03-28 00:52:38.731458 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:38.731467 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:38.731477 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:38.731486 | orchestrator | 2026-03-28 00:52:38.731496 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-03-28 00:52:38.731506 | orchestrator | Saturday 28 March 2026 00:48:32 +0000 (0:00:00.785) 0:00:52.084 ******** 2026-03-28 00:52:38.731515 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:38.731525 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:38.731534 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:38.731544 | orchestrator | 2026-03-28 00:52:38.731553 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-03-28 00:52:38.731563 | orchestrator | Saturday 28 March 2026 00:48:34 +0000 (0:00:02.802) 0:00:54.887 ******** 2026-03-28 00:52:38.731572 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:38.731582 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:38.731591 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:38.731601 | orchestrator | 2026-03-28 00:52:38.731610 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-03-28 00:52:38.731620 | orchestrator | Saturday 28 March 2026 00:48:37 +0000 (0:00:02.208) 0:00:57.096 ******** 2026-03-28 00:52:38.731629 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:38.731639 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:38.731648 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:38.731658 | orchestrator | 2026-03-28 00:52:38.731668 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-03-28 00:52:38.731677 | orchestrator | Saturday 28 March 2026 00:48:38 +0000 
(0:00:01.085) 0:00:58.181 ******** 2026-03-28 00:52:38.731687 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-28 00:52:38.731698 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-28 00:52:38.731707 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-28 00:52:38.731717 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-28 00:52:38.731727 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-28 00:52:38.731736 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-28 00:52:38.731746 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-28 00:52:38.731755 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-28 00:52:38.731765 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-28 00:52:38.731774 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-28 00:52:38.731784 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2026-03-28 00:52:38.731793 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-28 00:52:38.731803 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:38.731812 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:38.731822 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:38.731831 | orchestrator | 2026-03-28 00:52:38.731847 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-03-28 00:52:38.731857 | orchestrator | Saturday 28 March 2026 00:49:21 +0000 (0:00:43.293) 0:01:41.475 ******** 2026-03-28 00:52:38.731870 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:38.731886 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:38.731926 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:38.731944 | orchestrator | 2026-03-28 00:52:38.731960 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-03-28 00:52:38.731975 | orchestrator | Saturday 28 March 2026 00:49:21 +0000 (0:00:00.402) 0:01:41.877 ******** 2026-03-28 00:52:38.731991 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:38.732007 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:38.732023 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:38.732039 | orchestrator | 2026-03-28 00:52:38.732056 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-03-28 00:52:38.732133 | orchestrator | Saturday 28 March 2026 00:49:22 +0000 (0:00:01.090) 0:01:42.968 ******** 2026-03-28 00:52:38.732147 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:38.732157 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:38.732166 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:38.732176 | orchestrator | 2026-03-28 00:52:38.732195 | orchestrator | TASK [k3s_server : Enable and check K3s service] 
******************************* 2026-03-28 00:52:38.732205 | orchestrator | Saturday 28 March 2026 00:49:24 +0000 (0:00:01.862) 0:01:44.830 ******** 2026-03-28 00:52:38.732214 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:38.732224 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:38.732234 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:38.732243 | orchestrator | 2026-03-28 00:52:38.732253 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-03-28 00:52:38.732263 | orchestrator | Saturday 28 March 2026 00:49:50 +0000 (0:00:25.716) 0:02:10.546 ******** 2026-03-28 00:52:38.732273 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:38.732283 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:38.732292 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:38.732302 | orchestrator | 2026-03-28 00:52:38.732312 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-03-28 00:52:38.732322 | orchestrator | Saturday 28 March 2026 00:49:51 +0000 (0:00:01.155) 0:02:11.702 ******** 2026-03-28 00:52:38.732332 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:38.732341 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:38.732351 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:38.732361 | orchestrator | 2026-03-28 00:52:38.732370 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-03-28 00:52:38.732380 | orchestrator | Saturday 28 March 2026 00:49:52 +0000 (0:00:00.959) 0:02:12.661 ******** 2026-03-28 00:52:38.732390 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:38.732400 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:38.732409 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:38.732419 | orchestrator | 2026-03-28 00:52:38.732429 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-03-28 
00:52:38.732439 | orchestrator | Saturday 28 March 2026 00:49:53 +0000 (0:00:00.838) 0:02:13.500 ******** 2026-03-28 00:52:38.732448 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:38.732458 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:38.732468 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:38.732478 | orchestrator | 2026-03-28 00:52:38.732487 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-03-28 00:52:38.732497 | orchestrator | Saturday 28 March 2026 00:49:54 +0000 (0:00:01.145) 0:02:14.645 ******** 2026-03-28 00:52:38.732506 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:38.732516 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:38.732525 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:38.732535 | orchestrator | 2026-03-28 00:52:38.732545 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-03-28 00:52:38.732554 | orchestrator | Saturday 28 March 2026 00:49:54 +0000 (0:00:00.313) 0:02:14.959 ******** 2026-03-28 00:52:38.732573 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:38.732584 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:38.732593 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:38.732603 | orchestrator | 2026-03-28 00:52:38.732612 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-03-28 00:52:38.732622 | orchestrator | Saturday 28 March 2026 00:49:55 +0000 (0:00:00.727) 0:02:15.686 ******** 2026-03-28 00:52:38.732632 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:38.732642 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:38.732651 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:38.732661 | orchestrator | 2026-03-28 00:52:38.732670 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-03-28 00:52:38.732680 | orchestrator | Saturday 28 
March 2026 00:49:56 +0000 (0:00:00.791) 0:02:16.477 ******** 2026-03-28 00:52:38.732689 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:38.732699 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:38.732709 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:38.732719 | orchestrator | 2026-03-28 00:52:38.732729 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-03-28 00:52:38.732738 | orchestrator | Saturday 28 March 2026 00:49:58 +0000 (0:00:01.723) 0:02:18.201 ******** 2026-03-28 00:52:38.732748 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:38.732757 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:38.732768 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:38.732777 | orchestrator | 2026-03-28 00:52:38.732787 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-03-28 00:52:38.732797 | orchestrator | Saturday 28 March 2026 00:49:59 +0000 (0:00:01.199) 0:02:19.401 ******** 2026-03-28 00:52:38.732807 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:38.732816 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:38.732826 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:38.732835 | orchestrator | 2026-03-28 00:52:38.732845 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-03-28 00:52:38.732855 | orchestrator | Saturday 28 March 2026 00:49:59 +0000 (0:00:00.349) 0:02:19.751 ******** 2026-03-28 00:52:38.732872 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:38.732888 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:38.732996 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:38.733017 | orchestrator | 2026-03-28 00:52:38.733034 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-03-28 00:52:38.733046 | orchestrator | Saturday 28 March 
2026 00:50:00 +0000 (0:00:00.398) 0:02:20.149 ******** 2026-03-28 00:52:38.733056 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:38.733066 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:38.733076 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:38.733085 | orchestrator | 2026-03-28 00:52:38.733095 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-03-28 00:52:38.733105 | orchestrator | Saturday 28 March 2026 00:50:01 +0000 (0:00:01.334) 0:02:21.484 ******** 2026-03-28 00:52:38.733114 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:38.733124 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:38.733134 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:38.733143 | orchestrator | 2026-03-28 00:52:38.733153 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-03-28 00:52:38.733163 | orchestrator | Saturday 28 March 2026 00:50:02 +0000 (0:00:00.837) 0:02:22.321 ******** 2026-03-28 00:52:38.733173 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-28 00:52:38.733191 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-28 00:52:38.733201 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-28 00:52:38.733221 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-28 00:52:38.733231 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-28 00:52:38.733240 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-28 00:52:38.733250 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-28 00:52:38.733260 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-28 00:52:38.733269 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-28 00:52:38.733279 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-03-28 00:52:38.733288 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-28 00:52:38.733298 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-28 00:52:38.733307 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-03-28 00:52:38.733317 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-28 00:52:38.733327 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-28 00:52:38.733336 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-28 00:52:38.733346 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-28 00:52:38.733356 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-28 00:52:38.733365 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-28 00:52:38.733375 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-28 00:52:38.733385 | orchestrator | 2026-03-28 00:52:38.733394 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-03-28 00:52:38.733404 | orchestrator | 2026-03-28 00:52:38.733415 | 
orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-03-28 00:52:38.733424 | orchestrator | Saturday 28 March 2026 00:50:05 +0000 (0:00:03.261) 0:02:25.582 ******** 2026-03-28 00:52:38.733434 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:52:38.733444 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:52:38.733453 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:52:38.733463 | orchestrator | 2026-03-28 00:52:38.733473 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-03-28 00:52:38.733483 | orchestrator | Saturday 28 March 2026 00:50:06 +0000 (0:00:00.615) 0:02:26.198 ******** 2026-03-28 00:52:38.733493 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:52:38.733502 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:52:38.733512 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:52:38.733521 | orchestrator | 2026-03-28 00:52:38.733531 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-03-28 00:52:38.733540 | orchestrator | Saturday 28 March 2026 00:50:06 +0000 (0:00:00.681) 0:02:26.880 ******** 2026-03-28 00:52:38.733550 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:52:38.733560 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:52:38.733569 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:52:38.733579 | orchestrator | 2026-03-28 00:52:38.733588 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-03-28 00:52:38.733598 | orchestrator | Saturday 28 March 2026 00:50:07 +0000 (0:00:00.389) 0:02:27.269 ******** 2026-03-28 00:52:38.733608 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:52:38.733618 | orchestrator | 2026-03-28 00:52:38.733628 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-03-28 
00:52:38.733644 | orchestrator | Saturday 28 March 2026 00:50:08 +0000 (0:00:00.810) 0:02:28.079 ******** 2026-03-28 00:52:38.733654 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:38.733663 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:38.733673 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:38.733683 | orchestrator | 2026-03-28 00:52:38.733692 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-03-28 00:52:38.733702 | orchestrator | Saturday 28 March 2026 00:50:08 +0000 (0:00:00.356) 0:02:28.436 ******** 2026-03-28 00:52:38.733712 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:38.733721 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:38.733731 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:38.733740 | orchestrator | 2026-03-28 00:52:38.733750 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-03-28 00:52:38.734570 | orchestrator | Saturday 28 March 2026 00:50:08 +0000 (0:00:00.409) 0:02:28.845 ******** 2026-03-28 00:52:38.734631 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:38.734648 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:38.734665 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:38.734682 | orchestrator | 2026-03-28 00:52:38.734698 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-03-28 00:52:38.734716 | orchestrator | Saturday 28 March 2026 00:50:09 +0000 (0:00:00.355) 0:02:29.201 ******** 2026-03-28 00:52:38.734732 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:52:38.734750 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:52:38.734767 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:52:38.734784 | orchestrator | 2026-03-28 00:52:38.734817 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-03-28 
00:52:38.734843 | orchestrator | Saturday 28 March 2026 00:50:10 +0000 (0:00:00.932) 0:02:30.133 ******** 2026-03-28 00:52:38.734859 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:52:38.734875 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:52:38.734891 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:52:38.734982 | orchestrator | 2026-03-28 00:52:38.734999 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-03-28 00:52:38.735014 | orchestrator | Saturday 28 March 2026 00:50:11 +0000 (0:00:01.220) 0:02:31.354 ******** 2026-03-28 00:52:38.735027 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:52:38.735041 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:52:38.735054 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:52:38.735069 | orchestrator | 2026-03-28 00:52:38.735083 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-03-28 00:52:38.735098 | orchestrator | Saturday 28 March 2026 00:50:12 +0000 (0:00:01.352) 0:02:32.706 ******** 2026-03-28 00:52:38.735114 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:52:38.735130 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:52:38.735144 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:52:38.735158 | orchestrator | 2026-03-28 00:52:38.735174 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-28 00:52:38.735189 | orchestrator | 2026-03-28 00:52:38.735204 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-28 00:52:38.735219 | orchestrator | Saturday 28 March 2026 00:50:23 +0000 (0:00:10.363) 0:02:43.069 ******** 2026-03-28 00:52:38.735234 | orchestrator | ok: [testbed-manager] 2026-03-28 00:52:38.735247 | orchestrator | 2026-03-28 00:52:38.735258 | orchestrator | TASK [Create .kube directory] ************************************************** 
2026-03-28 00:52:38.735272 | orchestrator | Saturday 28 March 2026 00:50:24 +0000 (0:00:00.909) 0:02:43.979 ********
2026-03-28 00:52:38.735285 | orchestrator | changed: [testbed-manager]
2026-03-28 00:52:38.735298 | orchestrator |
2026-03-28 00:52:38.735310 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-28 00:52:38.735322 | orchestrator | Saturday 28 March 2026 00:50:24 +0000 (0:00:00.806) 0:02:44.785 ********
2026-03-28 00:52:38.735350 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-28 00:52:38.735363 | orchestrator |
2026-03-28 00:52:38.735376 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-28 00:52:38.735388 | orchestrator | Saturday 28 March 2026 00:50:25 +0000 (0:00:00.602) 0:02:45.388 ********
2026-03-28 00:52:38.735402 | orchestrator | changed: [testbed-manager]
2026-03-28 00:52:38.735414 | orchestrator |
2026-03-28 00:52:38.735427 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-28 00:52:38.735439 | orchestrator | Saturday 28 March 2026 00:50:26 +0000 (0:00:00.934) 0:02:46.322 ********
2026-03-28 00:52:38.735453 | orchestrator | changed: [testbed-manager]
2026-03-28 00:52:38.735462 | orchestrator |
2026-03-28 00:52:38.735470 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-28 00:52:38.735478 | orchestrator | Saturday 28 March 2026 00:50:26 +0000 (0:00:00.578) 0:02:46.901 ********
2026-03-28 00:52:38.735486 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-28 00:52:38.735494 | orchestrator |
2026-03-28 00:52:38.735502 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-28 00:52:38.735510 | orchestrator | Saturday 28 March 2026 00:50:28 +0000 (0:00:01.665) 0:02:48.566 ********
2026-03-28 00:52:38.735518 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-28 00:52:38.735526 | orchestrator |
2026-03-28 00:52:38.735534 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-28 00:52:38.735542 | orchestrator | Saturday 28 March 2026 00:50:29 +0000 (0:00:00.850) 0:02:49.417 ********
2026-03-28 00:52:38.735550 | orchestrator | changed: [testbed-manager]
2026-03-28 00:52:38.735558 | orchestrator |
2026-03-28 00:52:38.735566 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-28 00:52:38.735574 | orchestrator | Saturday 28 March 2026 00:50:30 +0000 (0:00:00.650) 0:02:50.067 ********
2026-03-28 00:52:38.735581 | orchestrator | changed: [testbed-manager]
2026-03-28 00:52:38.735589 | orchestrator |
2026-03-28 00:52:38.735597 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-28 00:52:38.735605 | orchestrator |
2026-03-28 00:52:38.735613 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-28 00:52:38.735621 | orchestrator | Saturday 28 March 2026 00:50:30 +0000 (0:00:00.522) 0:02:50.590 ********
2026-03-28 00:52:38.735628 | orchestrator | ok: [testbed-manager]
2026-03-28 00:52:38.735636 | orchestrator |
2026-03-28 00:52:38.735644 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-28 00:52:38.735652 | orchestrator | Saturday 28 March 2026 00:50:30 +0000 (0:00:00.170) 0:02:50.761 ********
2026-03-28 00:52:38.735660 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-28 00:52:38.735668 | orchestrator |
2026-03-28 00:52:38.735676 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-28 00:52:38.735684 | orchestrator | Saturday 28 March 2026 00:50:31 +0000 (0:00:00.226) 0:02:50.987 ********
2026-03-28 00:52:38.735692 | orchestrator | ok: [testbed-manager]
2026-03-28 00:52:38.735700 | orchestrator |
2026-03-28 00:52:38.735708 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-28 00:52:38.735716 | orchestrator | Saturday 28 March 2026 00:50:32 +0000 (0:00:00.994) 0:02:51.982 ********
2026-03-28 00:52:38.735723 | orchestrator | ok: [testbed-manager]
2026-03-28 00:52:38.735731 | orchestrator |
2026-03-28 00:52:38.735739 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-28 00:52:38.735747 | orchestrator | Saturday 28 March 2026 00:50:34 +0000 (0:00:02.195) 0:02:54.178 ********
2026-03-28 00:52:38.735755 | orchestrator | changed: [testbed-manager]
2026-03-28 00:52:38.735763 | orchestrator |
2026-03-28 00:52:38.735771 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-28 00:52:38.735779 | orchestrator | Saturday 28 March 2026 00:50:35 +0000 (0:00:01.028) 0:02:55.207 ********
2026-03-28 00:52:38.735787 | orchestrator | ok: [testbed-manager]
2026-03-28 00:52:38.735802 | orchestrator |
2026-03-28 00:52:38.735821 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-28 00:52:38.735836 | orchestrator | Saturday 28 March 2026 00:50:35 +0000 (0:00:00.603) 0:02:55.810 ********
2026-03-28 00:52:38.735844 | orchestrator | changed: [testbed-manager]
2026-03-28 00:52:38.735852 | orchestrator |
2026-03-28 00:52:38.735860 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-28 00:52:38.735868 | orchestrator | Saturday 28 March 2026 00:50:46 +0000 (0:00:10.362) 0:03:06.172 ********
2026-03-28 00:52:38.735876 | orchestrator | changed: [testbed-manager]
2026-03-28 00:52:38.735883 | orchestrator |
2026-03-28 00:52:38.735891 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-28 00:52:38.735899 | orchestrator | Saturday 28 March 2026 00:51:06 +0000 (0:00:19.951) 0:03:26.124 ********
2026-03-28 00:52:38.735933 | orchestrator | ok: [testbed-manager]
2026-03-28 00:52:38.735941 | orchestrator |
2026-03-28 00:52:38.735949 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-03-28 00:52:38.735957 | orchestrator |
2026-03-28 00:52:38.735967 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-28 00:52:38.735980 | orchestrator | Saturday 28 March 2026 00:51:06 +0000 (0:00:00.670) 0:03:26.794 ********
2026-03-28 00:52:38.735993 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:52:38.736005 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:52:38.736018 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:52:38.736031 | orchestrator |
2026-03-28 00:52:38.736044 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-28 00:52:38.736056 | orchestrator | Saturday 28 March 2026 00:51:07 +0000 (0:00:00.421) 0:03:27.216 ********
2026-03-28 00:52:38.736068 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:38.736082 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:52:38.736096 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:52:38.736109 | orchestrator |
2026-03-28 00:52:38.736123 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-28 00:52:38.736132 | orchestrator | Saturday 28 March 2026 00:51:07 +0000 (0:00:00.341) 0:03:27.557 ********
2026-03-28 00:52:38.736139 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:52:38.736147 | orchestrator |
2026-03-28 00:52:38.736155 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-28 00:52:38.736163 | orchestrator | Saturday 28 March 2026 00:51:08 +0000 (0:00:00.776) 0:03:28.333 ********
2026-03-28 00:52:38.736171 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-28 00:52:38.736178 | orchestrator |
2026-03-28 00:52:38.736186 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-03-28 00:52:38.736194 | orchestrator | Saturday 28 March 2026 00:51:09 +0000 (0:00:00.957) 0:03:29.290 ********
2026-03-28 00:52:38.736202 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 00:52:38.736210 | orchestrator |
2026-03-28 00:52:38.736218 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-03-28 00:52:38.736226 | orchestrator | Saturday 28 March 2026 00:51:10 +0000 (0:00:00.899) 0:03:30.190 ********
2026-03-28 00:52:38.736234 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:38.736242 | orchestrator |
2026-03-28 00:52:38.736250 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-03-28 00:52:38.736258 | orchestrator | Saturday 28 March 2026 00:51:10 +0000 (0:00:00.177) 0:03:30.368 ********
2026-03-28 00:52:38.736266 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 00:52:38.736274 | orchestrator |
2026-03-28 00:52:38.736281 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-03-28 00:52:38.736289 | orchestrator | Saturday 28 March 2026 00:51:12 +0000 (0:00:01.625) 0:03:31.993 ********
2026-03-28 00:52:38.736298 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:38.736305 | orchestrator |
2026-03-28 00:52:38.736313 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-03-28 00:52:38.736329 | orchestrator | Saturday 28 March 2026 00:51:12 +0000 (0:00:00.211) 0:03:32.205 ********
2026-03-28 00:52:38.736337 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:38.736345 | orchestrator |
2026-03-28 00:52:38.736353 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-03-28 00:52:38.736361 | orchestrator | Saturday 28 March 2026 00:51:12 +0000 (0:00:00.228) 0:03:32.433 ********
2026-03-28 00:52:38.736369 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:38.736376 | orchestrator |
2026-03-28 00:52:38.736384 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-03-28 00:52:38.736392 | orchestrator | Saturday 28 March 2026 00:51:12 +0000 (0:00:00.159) 0:03:32.593 ********
2026-03-28 00:52:38.736400 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:38.736408 | orchestrator |
2026-03-28 00:52:38.736415 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-03-28 00:52:38.736423 | orchestrator | Saturday 28 March 2026 00:51:12 +0000 (0:00:00.152) 0:03:32.745 ********
2026-03-28 00:52:38.736431 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-28 00:52:38.736439 | orchestrator |
2026-03-28 00:52:38.736447 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-03-28 00:52:38.736455 | orchestrator | Saturday 28 March 2026 00:51:17 +0000 (0:00:05.073) 0:03:37.818 ********
2026-03-28 00:52:38.736463 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-03-28 00:52:38.736471 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-03-28 00:52:38.736480 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-03-28 00:52:38.736488 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-03-28 00:52:38.736495 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-03-28 00:52:38.736503 | orchestrator |
2026-03-28 00:52:38.736511 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-03-28 00:52:38.736519 | orchestrator | Saturday 28 March 2026 00:52:00 +0000 (0:00:42.765) 0:04:20.584 ********
2026-03-28 00:52:38.736533 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 00:52:38.736541 | orchestrator |
2026-03-28 00:52:38.736554 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-03-28 00:52:38.736562 | orchestrator | Saturday 28 March 2026 00:52:01 +0000 (0:00:01.291) 0:04:21.875 ********
2026-03-28 00:52:38.736570 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-28 00:52:38.736578 | orchestrator |
2026-03-28 00:52:38.736586 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-03-28 00:52:38.736594 | orchestrator | Saturday 28 March 2026 00:52:04 +0000 (0:00:02.286) 0:04:24.162 ********
2026-03-28 00:52:38.736602 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-28 00:52:38.736610 | orchestrator |
2026-03-28 00:52:38.736618 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-03-28 00:52:38.736626 | orchestrator | Saturday 28 March 2026 00:52:05 +0000 (0:00:01.221) 0:04:25.384 ********
2026-03-28 00:52:38.736634 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:38.736642 | orchestrator |
2026-03-28 00:52:38.736650 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-03-28 00:52:38.736658 | orchestrator | Saturday 28 March 2026 00:52:05 +0000 (0:00:00.145) 0:04:25.529 ********
2026-03-28 00:52:38.736666 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-03-28 00:52:38.736674 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-03-28 00:52:38.736682 | orchestrator |
2026-03-28 00:52:38.736690 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-03-28 00:52:38.736698 | orchestrator | Saturday 28 March 2026 00:52:07 +0000 (0:00:02.087) 0:04:27.616 ********
2026-03-28 00:52:38.736706 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:38.736725 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:52:38.736733 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:52:38.736741 | orchestrator |
2026-03-28 00:52:38.736749 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-03-28 00:52:38.736757 | orchestrator | Saturday 28 March 2026 00:52:08 +0000 (0:00:00.478) 0:04:28.095 ********
2026-03-28 00:52:38.736765 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:52:38.736773 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:52:38.736780 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:52:38.736788 | orchestrator |
2026-03-28 00:52:38.736797 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-03-28 00:52:38.736804 | orchestrator |
2026-03-28 00:52:38.736812 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-03-28 00:52:38.736820 | orchestrator | Saturday 28 March 2026 00:52:09 +0000 (0:00:01.484) 0:04:29.580 ********
2026-03-28 00:52:38.736828 | orchestrator | ok: [testbed-manager]
2026-03-28 00:52:38.736836 | orchestrator |
2026-03-28 00:52:38.736844 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-03-28 00:52:38.736852 | orchestrator | Saturday 28 March 2026 00:52:09 +0000 (0:00:00.151) 0:04:29.731 ********
2026-03-28 00:52:38.736860 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-03-28 00:52:38.736868 | orchestrator |
2026-03-28 00:52:38.736876 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-03-28 00:52:38.736884 | orchestrator | Saturday 28 March 2026 00:52:09 +0000 (0:00:00.236) 0:04:29.968 ********
2026-03-28 00:52:38.736892 | orchestrator | changed: [testbed-manager]
2026-03-28 00:52:38.736918 | orchestrator |
2026-03-28 00:52:38.736933 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-03-28 00:52:38.736945 | orchestrator |
2026-03-28 00:52:38.736953 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-03-28 00:52:38.736961 | orchestrator | Saturday 28 March 2026 00:52:15 +0000 (0:00:05.617) 0:04:35.585 ********
2026-03-28 00:52:38.736969 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:52:38.736977 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:52:38.736985 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:52:38.736993 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:52:38.737001 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:52:38.737008 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:52:38.737016 | orchestrator |
2026-03-28 00:52:38.737024 | orchestrator | TASK [Manage labels] ***********************************************************
2026-03-28 00:52:38.737032 | orchestrator | Saturday 28 March 2026 00:52:17 +0000 (0:00:01.690) 0:04:37.275 ********
2026-03-28 00:52:38.737041 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-28 00:52:38.737048 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-28 00:52:38.737056 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-28 00:52:38.737064 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-28 00:52:38.737072 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-28 00:52:38.737080 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-28 00:52:38.737088 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-28 00:52:38.737095 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-28 00:52:38.737103 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-28 00:52:38.737111 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-28 00:52:38.737119 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-28 00:52:38.737130 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-28 00:52:38.737160 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-28 00:52:38.737174 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-28 00:52:38.737193 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-28 00:52:38.737206 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-28 00:52:38.737218 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-28 00:52:38.737231 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-28 00:52:38.737243 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-28 00:52:38.737256 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-28 00:52:38.737268 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-28 00:52:38.737281 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-28 00:52:38.737294 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-28 00:52:38.737306 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-28 00:52:38.737318 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-28 00:52:38.737330 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-28 00:52:38.737343 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-28 00:52:38.737355 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-28 00:52:38.737368 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-28 00:52:38.737381 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-28 00:52:38.737393 | orchestrator |
2026-03-28 00:52:38.737406 | orchestrator | TASK [Manage annotations] ******************************************************
2026-03-28 00:52:38.737418 | orchestrator | Saturday 28 March 2026 00:52:34 +0000 (0:00:17.181) 0:04:54.457 ********
2026-03-28 00:52:38.737431 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:52:38.737444 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:52:38.737457 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:52:38.737470 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:38.737482 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:52:38.737495 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:52:38.737508 | orchestrator |
2026-03-28 00:52:38.737521 | orchestrator | TASK [Manage taints] ***********************************************************
2026-03-28 00:52:38.737534 | orchestrator | Saturday 28 March 2026 00:52:35 +0000 (0:00:00.679) 0:04:55.136 ********
2026-03-28 00:52:38.737548 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:52:38.737559 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:52:38.737567 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:52:38.737575 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:38.737582 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:52:38.737590 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:52:38.737598 | orchestrator |
2026-03-28 00:52:38.737606 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:52:38.737614 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:52:38.737624 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-28 00:52:38.737633 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-28 00:52:38.737652 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-28 00:52:38.737660 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-28 00:52:38.737668 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-28 00:52:38.737676 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-28 00:52:38.737684 | orchestrator |
2026-03-28 00:52:38.737691 | orchestrator |
2026-03-28 00:52:38.737699 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:52:38.737707 | orchestrator | Saturday 28 March 2026 00:52:35 +0000 (0:00:00.523) 0:04:55.659 ********
2026-03-28 00:52:38.737715 | orchestrator | ===============================================================================
2026-03-28 00:52:38.737723 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.29s
2026-03-28 00:52:38.737731 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.77s
2026-03-28 00:52:38.737739 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.72s
2026-03-28 00:52:38.737755 | orchestrator | kubectl : Install required packages ------------------------------------ 19.95s
2026-03-28 00:52:38.737768 | orchestrator | Manage labels ---------------------------------------------------------- 17.18s
2026-03-28 00:52:38.737776 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.36s
2026-03-28 00:52:38.737784 | orchestrator | kubectl : Add repository Debian ---------------------------------------- 10.36s
2026-03-28 00:52:38.737792 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.62s
2026-03-28 00:52:38.737800 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.07s
2026-03-28 00:52:38.737807 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 4.84s
2026-03-28 00:52:38.737815 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 3.77s
2026-03-28 00:52:38.737823 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.64s
2026-03-28 00:52:38.737831 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.26s
2026-03-28 00:52:38.737839 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.80s
2026-03-28 00:52:38.737847 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.73s
2026-03-28 00:52:38.737855 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.30s
2026-03-28 00:52:38.737863 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.29s
2026-03-28 00:52:38.737871 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.24s
2026-03-28 00:52:38.737879 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 2.24s
2026-03-28 00:52:38.737886 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.21s
2026-03-28 00:52:38.737894 | orchestrator | 2026-03-28 00:52:38 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:52:38.738105 | orchestrator | 2026-03-28 00:52:38 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED
2026-03-28 00:52:38.738137 | orchestrator | 2026-03-28 00:52:38 | INFO  | Task 78a40359-0a20-4254-9e0b-f63e1fc2ad1d is in state STARTED
2026-03-28 00:52:38.738146 | orchestrator | 2026-03-28 00:52:38 | INFO  | Task 1053c510-7e39-41c4-8c1a-7aa262b47915 is in state STARTED
2026-03-28 00:52:38.738164 | orchestrator | 2026-03-28 00:52:38 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:52:41.813173 | orchestrator | 2026-03-28 00:52:41 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED
2026-03-28 00:52:41.813492 | orchestrator | 2026-03-28 00:52:41 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED
2026-03-28 00:52:41.816255 | orchestrator | 2026-03-28 00:52:41 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:52:41.816579 | orchestrator | 2026-03-28 00:52:41 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED
2026-03-28 00:52:41.818560 | orchestrator | 2026-03-28 00:52:41 | INFO  | Task 78a40359-0a20-4254-9e0b-f63e1fc2ad1d is in state STARTED
2026-03-28 00:52:41.820382 | orchestrator | 2026-03-28 00:52:41 | INFO  | Task 1053c510-7e39-41c4-8c1a-7aa262b47915 is in state STARTED
2026-03-28 00:52:41.820430 | orchestrator | 2026-03-28 00:52:41 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:52:44.861360 | orchestrator | 2026-03-28 00:52:44 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED
2026-03-28 00:52:44.866232 | orchestrator | 2026-03-28 00:52:44 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED
2026-03-28 00:52:44.867555 | orchestrator | 2026-03-28 00:52:44 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:52:44.869266 | orchestrator | 2026-03-28 00:52:44 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED
2026-03-28 00:52:44.870368 | orchestrator | 2026-03-28 00:52:44 | INFO  | Task 78a40359-0a20-4254-9e0b-f63e1fc2ad1d is in state STARTED
2026-03-28 00:52:44.871658 | orchestrator | 2026-03-28 00:52:44 | INFO  | Task 1053c510-7e39-41c4-8c1a-7aa262b47915 is in state STARTED
2026-03-28 00:52:44.871753 | orchestrator | 2026-03-28 00:52:44 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:52:47.916731 | orchestrator | 2026-03-28 00:52:47 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED
2026-03-28 00:52:47.916963 | orchestrator | 2026-03-28 00:52:47 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED
2026-03-28 00:52:47.917096 | orchestrator | 2026-03-28 00:52:47 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:52:47.917857 | orchestrator | 2026-03-28 00:52:47 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED
2026-03-28 00:52:47.918553 | orchestrator | 2026-03-28 00:52:47 | INFO  | Task 78a40359-0a20-4254-9e0b-f63e1fc2ad1d is in state STARTED
2026-03-28 00:52:47.919187 | orchestrator | 2026-03-28 00:52:47 | INFO  | Task 1053c510-7e39-41c4-8c1a-7aa262b47915 is in state SUCCESS
2026-03-28 00:52:47.919305 | orchestrator | 2026-03-28 00:52:47 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:52:50.962415 | orchestrator | 2026-03-28 00:52:50 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED
2026-03-28 00:52:50.965489 | orchestrator | 2026-03-28 00:52:50 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED
2026-03-28 00:52:50.967008 | orchestrator | 2026-03-28 00:52:50 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:52:50.969261 | orchestrator | 2026-03-28 00:52:50 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED
2026-03-28 00:52:50.971615 | orchestrator | 2026-03-28 00:52:50 | INFO  | Task 78a40359-0a20-4254-9e0b-f63e1fc2ad1d is in state STARTED
2026-03-28 00:52:50.971984 | orchestrator | 2026-03-28 00:52:50 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:52:54.040313 | orchestrator | 2026-03-28 00:52:54 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED
2026-03-28 00:52:54.040594 | orchestrator | 2026-03-28 00:52:54 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED
2026-03-28 00:52:54.041511 | orchestrator | 2026-03-28 00:52:54 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:52:54.042340 | orchestrator | 2026-03-28 00:52:54 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED
2026-03-28 00:52:54.042954 | orchestrator | 2026-03-28 00:52:54 | INFO  | Task 78a40359-0a20-4254-9e0b-f63e1fc2ad1d is in state SUCCESS
2026-03-28 00:52:54.043461 | orchestrator | 2026-03-28 00:52:54 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:52:57.093644 | orchestrator | 2026-03-28 00:52:57 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED
2026-03-28 00:52:57.095958 | orchestrator | 2026-03-28 00:52:57 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED
2026-03-28 00:52:57.098120 | orchestrator | 2026-03-28 00:52:57 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:52:57.102126 | orchestrator | 2026-03-28 00:52:57 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED
2026-03-28 00:52:57.102250 | orchestrator | 2026-03-28 00:52:57 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:53:00.146115 | orchestrator | 2026-03-28 00:53:00 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED
2026-03-28 00:53:00.147532 | orchestrator | 2026-03-28 00:53:00 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED
2026-03-28 00:53:00.148830 | orchestrator | 2026-03-28 00:53:00 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:53:00.153054 | orchestrator | 2026-03-28 00:53:00 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED
2026-03-28 00:53:00.153130 | orchestrator | 2026-03-28 00:53:00 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:53:03.195750 | orchestrator | 2026-03-28 00:53:03 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED
2026-03-28 00:53:03.197986 | orchestrator | 2026-03-28 00:53:03 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED
2026-03-28 00:53:03.200280 | orchestrator | 2026-03-28 00:53:03 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:53:03.202076 | orchestrator | 2026-03-28 00:53:03 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED
2026-03-28 00:53:03.202109 | orchestrator | 2026-03-28 00:53:03 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:53:06.249011 | orchestrator | 2026-03-28 00:53:06 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED
2026-03-28 00:53:06.251125 | orchestrator | 2026-03-28 00:53:06 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED
2026-03-28 00:53:06.252386 | orchestrator | 2026-03-28 00:53:06 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:53:06.255343 | orchestrator | 2026-03-28 00:53:06 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED
2026-03-28 00:53:06.255559 | orchestrator | 2026-03-28 00:53:06 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:53:09.298246 | orchestrator | 2026-03-28 00:53:09 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED
2026-03-28 00:53:09.299072 | orchestrator | 2026-03-28 00:53:09 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED
2026-03-28 00:53:09.300396 | orchestrator | 2026-03-28 00:53:09 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:53:09.301915 | orchestrator | 2026-03-28 00:53:09 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED
2026-03-28 00:53:09.301970 | orchestrator | 2026-03-28 00:53:09 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:53:12.354335 | orchestrator | 2026-03-28 00:53:12 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED
2026-03-28 00:53:12.354545 | orchestrator | 2026-03-28 00:53:12 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED
2026-03-28 00:53:12.357608 | orchestrator | 2026-03-28 00:53:12 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:53:12.358769 | orchestrator | 2026-03-28 00:53:12 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED
2026-03-28 00:53:12.358810 | orchestrator | 2026-03-28 00:53:12 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:53:15.402594 | orchestrator | 2026-03-28 00:53:15 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED
2026-03-28 00:53:15.404166 | orchestrator | 2026-03-28 00:53:15 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED
2026-03-28 00:53:15.405445 | orchestrator | 2026-03-28 00:53:15 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:53:15.406926 | orchestrator | 2026-03-28 00:53:15 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED
2026-03-28 00:53:15.406957 | orchestrator | 2026-03-28 00:53:15 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:53:18.448016 | orchestrator | 2026-03-28 00:53:18 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED
2026-03-28 00:53:18.448126 | orchestrator | 2026-03-28 00:53:18 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED
2026-03-28 00:53:18.448392 | orchestrator | 2026-03-28 00:53:18 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:53:18.449008 | orchestrator | 2026-03-28 00:53:18 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED
2026-03-28 00:53:18.449033 | orchestrator | 2026-03-28 00:53:18 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:53:21.489233 | orchestrator | 2026-03-28 00:53:21 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED
2026-03-28 00:53:21.490417 | orchestrator | 2026-03-28 00:53:21 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED
2026-03-28 00:53:21.493569 | orchestrator | 2026-03-28 00:53:21 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED
2026-03-28 00:53:21.494646 | orchestrator | 2026-03-28 00:53:21 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED
2026-03-28 00:53:21.494713 | orchestrator | 2026-03-28 00:53:21 | INFO  |
Wait 1 second(s) until the next check 2026-03-28 00:53:24.538593 | orchestrator | 2026-03-28 00:53:24 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:53:24.542404 | orchestrator | 2026-03-28 00:53:24 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED 2026-03-28 00:53:24.543594 | orchestrator | 2026-03-28 00:53:24 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:53:24.544702 | orchestrator | 2026-03-28 00:53:24 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED 2026-03-28 00:53:24.544772 | orchestrator | 2026-03-28 00:53:24 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:27.576158 | orchestrator | 2026-03-28 00:53:27 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:53:27.576392 | orchestrator | 2026-03-28 00:53:27 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED 2026-03-28 00:53:27.579026 | orchestrator | 2026-03-28 00:53:27 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:53:27.579070 | orchestrator | 2026-03-28 00:53:27 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED 2026-03-28 00:53:27.579079 | orchestrator | 2026-03-28 00:53:27 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:30.627526 | orchestrator | 2026-03-28 00:53:30 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:53:30.629002 | orchestrator | 2026-03-28 00:53:30 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED 2026-03-28 00:53:30.630485 | orchestrator | 2026-03-28 00:53:30 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:53:30.632265 | orchestrator | 2026-03-28 00:53:30 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED 2026-03-28 00:53:30.632348 | orchestrator | 2026-03-28 00:53:30 | INFO  | Wait 1 second(s) until the next 
check 2026-03-28 00:53:33.681160 | orchestrator | 2026-03-28 00:53:33 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:53:33.683624 | orchestrator | 2026-03-28 00:53:33 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED 2026-03-28 00:53:33.685862 | orchestrator | 2026-03-28 00:53:33 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:53:33.689088 | orchestrator | 2026-03-28 00:53:33 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED 2026-03-28 00:53:33.689132 | orchestrator | 2026-03-28 00:53:33 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:36.717449 | orchestrator | 2026-03-28 00:53:36 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:53:36.717540 | orchestrator | 2026-03-28 00:53:36 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED 2026-03-28 00:53:36.718601 | orchestrator | 2026-03-28 00:53:36 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:53:36.719949 | orchestrator | 2026-03-28 00:53:36 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED 2026-03-28 00:53:36.719983 | orchestrator | 2026-03-28 00:53:36 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:39.755877 | orchestrator | 2026-03-28 00:53:39 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:53:39.757585 | orchestrator | 2026-03-28 00:53:39 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state STARTED 2026-03-28 00:53:39.759609 | orchestrator | 2026-03-28 00:53:39 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:53:39.761428 | orchestrator | 2026-03-28 00:53:39 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED 2026-03-28 00:53:39.761616 | orchestrator | 2026-03-28 00:53:39 | INFO  | Wait 1 second(s) until the next check 2026-03-28 
00:53:42.802447 | orchestrator | 2026-03-28 00:53:42 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:53:42.802727 | orchestrator | 2026-03-28 00:53:42 | INFO  | Task a70d51f6-7467-4251-99a6-c04f8e66d2ab is in state SUCCESS 2026-03-28 00:53:42.803538 | orchestrator | 2026-03-28 00:53:42.803569 | orchestrator | 2026-03-28 00:53:42.803583 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-03-28 00:53:42.803597 | orchestrator | 2026-03-28 00:53:42.803610 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-28 00:53:42.803650 | orchestrator | Saturday 28 March 2026 00:52:42 +0000 (0:00:00.296) 0:00:00.296 ******** 2026-03-28 00:53:42.803662 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-28 00:53:42.803673 | orchestrator | 2026-03-28 00:53:42.803685 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-28 00:53:42.803696 | orchestrator | Saturday 28 March 2026 00:52:43 +0000 (0:00:00.977) 0:00:01.273 ******** 2026-03-28 00:53:42.803706 | orchestrator | changed: [testbed-manager] 2026-03-28 00:53:42.803717 | orchestrator | 2026-03-28 00:53:42.803729 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-03-28 00:53:42.803740 | orchestrator | Saturday 28 March 2026 00:52:45 +0000 (0:00:01.949) 0:00:03.223 ******** 2026-03-28 00:53:42.803788 | orchestrator | changed: [testbed-manager] 2026-03-28 00:53:42.803870 | orchestrator | 2026-03-28 00:53:42.803883 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:53:42.803894 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:53:42.803907 | orchestrator | 2026-03-28 00:53:42.803918 | orchestrator | 2026-03-28 00:53:42.803928 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:53:42.803939 | orchestrator | Saturday 28 March 2026 00:52:45 +0000 (0:00:00.645) 0:00:03.868 ******** 2026-03-28 00:53:42.803950 | orchestrator | =============================================================================== 2026-03-28 00:53:42.803984 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.95s 2026-03-28 00:53:42.803995 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.98s 2026-03-28 00:53:42.804006 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.65s 2026-03-28 00:53:42.804017 | orchestrator | 2026-03-28 00:53:42.804028 | orchestrator | 2026-03-28 00:53:42.804039 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-28 00:53:42.804050 | orchestrator | 2026-03-28 00:53:42.804060 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-28 00:53:42.804071 | orchestrator | Saturday 28 March 2026 00:52:41 +0000 (0:00:00.168) 0:00:00.168 ******** 2026-03-28 00:53:42.804082 | orchestrator | ok: [testbed-manager] 2026-03-28 00:53:42.804094 | orchestrator | 2026-03-28 00:53:42.804117 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-28 00:53:42.804144 | orchestrator | Saturday 28 March 2026 00:52:42 +0000 (0:00:00.750) 0:00:00.919 ******** 2026-03-28 00:53:42.804155 | orchestrator | ok: [testbed-manager] 2026-03-28 00:53:42.804166 | orchestrator | 2026-03-28 00:53:42.804177 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-28 00:53:42.804188 | orchestrator | Saturday 28 March 2026 00:52:43 +0000 (0:00:00.719) 0:00:01.638 ******** 2026-03-28 00:53:42.804199 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] 2026-03-28 00:53:42.804210 | orchestrator | 2026-03-28 00:53:42.804220 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-28 00:53:42.804231 | orchestrator | Saturday 28 March 2026 00:52:44 +0000 (0:00:00.946) 0:00:02.585 ******** 2026-03-28 00:53:42.804301 | orchestrator | changed: [testbed-manager] 2026-03-28 00:53:42.804313 | orchestrator | 2026-03-28 00:53:42.804324 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-28 00:53:42.804334 | orchestrator | Saturday 28 March 2026 00:52:45 +0000 (0:00:01.961) 0:00:04.546 ******** 2026-03-28 00:53:42.804345 | orchestrator | changed: [testbed-manager] 2026-03-28 00:53:42.804356 | orchestrator | 2026-03-28 00:53:42.804367 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-28 00:53:42.804378 | orchestrator | Saturday 28 March 2026 00:52:46 +0000 (0:00:00.689) 0:00:05.236 ******** 2026-03-28 00:53:42.804389 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-28 00:53:42.804399 | orchestrator | 2026-03-28 00:53:42.804410 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-28 00:53:42.804432 | orchestrator | Saturday 28 March 2026 00:52:49 +0000 (0:00:02.404) 0:00:07.640 ******** 2026-03-28 00:53:42.804470 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-28 00:53:42.804481 | orchestrator | 2026-03-28 00:53:42.804492 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-28 00:53:42.804503 | orchestrator | Saturday 28 March 2026 00:52:50 +0000 (0:00:00.936) 0:00:08.577 ******** 2026-03-28 00:53:42.804514 | orchestrator | ok: [testbed-manager] 2026-03-28 00:53:42.804524 | orchestrator | 2026-03-28 00:53:42.804536 | orchestrator | TASK [Enable kubectl command line completion] 
********************************** 2026-03-28 00:53:42.804546 | orchestrator | Saturday 28 March 2026 00:52:50 +0000 (0:00:00.497) 0:00:09.074 ******** 2026-03-28 00:53:42.804557 | orchestrator | ok: [testbed-manager] 2026-03-28 00:53:42.804568 | orchestrator | 2026-03-28 00:53:42.804579 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:53:42.804589 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:53:42.804600 | orchestrator | 2026-03-28 00:53:42.804611 | orchestrator | 2026-03-28 00:53:42.804622 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:53:42.804633 | orchestrator | Saturday 28 March 2026 00:52:50 +0000 (0:00:00.352) 0:00:09.427 ******** 2026-03-28 00:53:42.804643 | orchestrator | =============================================================================== 2026-03-28 00:53:42.804654 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.40s 2026-03-28 00:53:42.804665 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.96s 2026-03-28 00:53:42.804676 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.95s 2026-03-28 00:53:42.804701 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.94s 2026-03-28 00:53:42.804712 | orchestrator | Get home directory of operator user ------------------------------------- 0.75s 2026-03-28 00:53:42.804723 | orchestrator | Create .kube directory -------------------------------------------------- 0.72s 2026-03-28 00:53:42.804734 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.69s 2026-03-28 00:53:42.804744 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.50s 2026-03-28 00:53:42.804755 | 
orchestrator | Enable kubectl command line completion ---------------------------------- 0.35s 2026-03-28 00:53:42.804766 | orchestrator | 2026-03-28 00:53:42.805026 | orchestrator | 2026-03-28 00:53:42.805046 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-03-28 00:53:42.805057 | orchestrator | 2026-03-28 00:53:42.805068 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-28 00:53:42.805079 | orchestrator | Saturday 28 March 2026 00:51:15 +0000 (0:00:00.426) 0:00:00.426 ******** 2026-03-28 00:53:42.805089 | orchestrator | ok: [localhost] => { 2026-03-28 00:53:42.805101 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-03-28 00:53:42.805113 | orchestrator | } 2026-03-28 00:53:42.805124 | orchestrator | 2026-03-28 00:53:42.805135 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-03-28 00:53:42.805146 | orchestrator | Saturday 28 March 2026 00:51:15 +0000 (0:00:00.077) 0:00:00.504 ******** 2026-03-28 00:53:42.805157 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-03-28 00:53:42.805169 | orchestrator | ...ignoring 2026-03-28 00:53:42.805181 | orchestrator | 2026-03-28 00:53:42.805191 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-03-28 00:53:42.805202 | orchestrator | Saturday 28 March 2026 00:51:19 +0000 (0:00:04.167) 0:00:04.671 ******** 2026-03-28 00:53:42.805213 | orchestrator | skipping: [localhost] 2026-03-28 00:53:42.805224 | orchestrator | 2026-03-28 00:53:42.805245 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-03-28 00:53:42.805256 | orchestrator | Saturday 28 March 2026 00:51:19 +0000 (0:00:00.053) 0:00:04.725 ******** 2026-03-28 00:53:42.805266 | orchestrator | ok: [localhost] 2026-03-28 00:53:42.805277 | orchestrator | 2026-03-28 00:53:42.805288 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 00:53:42.805299 | orchestrator | 2026-03-28 00:53:42.805309 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 00:53:42.805327 | orchestrator | Saturday 28 March 2026 00:51:20 +0000 (0:00:00.183) 0:00:04.908 ******** 2026-03-28 00:53:42.805339 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:53:42.805349 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:53:42.805360 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:53:42.805371 | orchestrator | 2026-03-28 00:53:42.805381 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 00:53:42.805392 | orchestrator | Saturday 28 March 2026 00:51:20 +0000 (0:00:00.353) 0:00:05.261 ******** 2026-03-28 00:53:42.805403 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-28 00:53:42.805414 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2026-03-28 00:53:42.805428 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-28 00:53:42.805447 | orchestrator | 2026-03-28 00:53:42.805464 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-28 00:53:42.805482 | orchestrator | 2026-03-28 00:53:42.805500 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-28 00:53:42.805518 | orchestrator | Saturday 28 March 2026 00:51:21 +0000 (0:00:00.861) 0:00:06.123 ******** 2026-03-28 00:53:42.805536 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:53:42.805635 | orchestrator | 2026-03-28 00:53:42.805660 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-28 00:53:42.805679 | orchestrator | Saturday 28 March 2026 00:51:22 +0000 (0:00:00.706) 0:00:06.829 ******** 2026-03-28 00:53:42.805699 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:53:42.805717 | orchestrator | 2026-03-28 00:53:42.805736 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-28 00:53:42.805754 | orchestrator | Saturday 28 March 2026 00:51:23 +0000 (0:00:01.077) 0:00:07.907 ******** 2026-03-28 00:53:42.805773 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:53:42.805891 | orchestrator | 2026-03-28 00:53:42.805918 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-28 00:53:42.805940 | orchestrator | Saturday 28 March 2026 00:51:23 +0000 (0:00:00.411) 0:00:08.319 ******** 2026-03-28 00:53:42.805954 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:53:42.805967 | orchestrator | 2026-03-28 00:53:42.805980 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-28 00:53:42.805993 | 
orchestrator | Saturday 28 March 2026 00:51:23 +0000 (0:00:00.433) 0:00:08.752 ******** 2026-03-28 00:53:42.806005 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:53:42.806068 | orchestrator | 2026-03-28 00:53:42.806083 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-28 00:53:42.806093 | orchestrator | Saturday 28 March 2026 00:51:24 +0000 (0:00:00.470) 0:00:09.223 ******** 2026-03-28 00:53:42.806102 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:53:42.806112 | orchestrator | 2026-03-28 00:53:42.806122 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-28 00:53:42.806136 | orchestrator | Saturday 28 March 2026 00:51:25 +0000 (0:00:01.302) 0:00:10.525 ******** 2026-03-28 00:53:42.806155 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:53:42.806174 | orchestrator | 2026-03-28 00:53:42.806190 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-28 00:53:42.806207 | orchestrator | Saturday 28 March 2026 00:51:26 +0000 (0:00:00.883) 0:00:11.408 ******** 2026-03-28 00:53:42.806241 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:53:42.806259 | orchestrator | 2026-03-28 00:53:42.806274 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-28 00:53:42.806284 | orchestrator | Saturday 28 March 2026 00:51:27 +0000 (0:00:01.085) 0:00:12.494 ******** 2026-03-28 00:53:42.806294 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:53:42.806303 | orchestrator | 2026-03-28 00:53:42.806313 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-28 00:53:42.806323 | orchestrator | Saturday 28 March 2026 00:51:29 +0000 (0:00:01.472) 0:00:13.967 ******** 2026-03-28 00:53:42.806333 | orchestrator | 
skipping: [testbed-node-0] 2026-03-28 00:53:42.806343 | orchestrator | 2026-03-28 00:53:42.806373 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-28 00:53:42.806383 | orchestrator | Saturday 28 March 2026 00:51:30 +0000 (0:00:01.103) 0:00:15.070 ******** 2026-03-28 00:53:42.806399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:53:42.806422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:53:42.806443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:53:42.806471 | orchestrator | 2026-03-28 00:53:42.806488 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-28 00:53:42.806506 | orchestrator | Saturday 28 March 2026 00:51:31 +0000 (0:00:01.675) 0:00:16.745 ******** 2026-03-28 00:53:42.806533 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:53:42.806557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:53:42.806569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:53:42.806580 | orchestrator | 2026-03-28 00:53:42.806590 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-28 00:53:42.806600 | orchestrator | Saturday 28 March 2026 00:51:33 +0000 (0:00:01.936) 0:00:18.681 ******** 2026-03-28 00:53:42.806611 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-28 00:53:42.806639 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-28 00:53:42.806657 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-28 00:53:42.806673 | 
orchestrator | 2026-03-28 00:53:42.806688 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-28 00:53:42.806704 | orchestrator | Saturday 28 March 2026 00:51:36 +0000 (0:00:02.164) 0:00:20.845 ******** 2026-03-28 00:53:42.806823 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-28 00:53:42.806839 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-28 00:53:42.806849 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-28 00:53:42.806859 | orchestrator | 2026-03-28 00:53:42.806869 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-28 00:53:42.806878 | orchestrator | Saturday 28 March 2026 00:51:38 +0000 (0:00:02.272) 0:00:23.118 ******** 2026-03-28 00:53:42.806888 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-28 00:53:42.806898 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-28 00:53:42.806907 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-28 00:53:42.806917 | orchestrator | 2026-03-28 00:53:42.806935 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-28 00:53:42.806945 | orchestrator | Saturday 28 March 2026 00:51:40 +0000 (0:00:01.871) 0:00:24.990 ******** 2026-03-28 00:53:42.806955 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-28 00:53:42.806965 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-28 00:53:42.806975 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-28 00:53:42.806984 | orchestrator | 2026-03-28 00:53:42.806994 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-28 00:53:42.807003 | orchestrator | Saturday 28 March 2026 00:51:42 +0000 (0:00:02.754) 0:00:27.745 ******** 2026-03-28 00:53:42.807013 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-28 00:53:42.807023 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-28 00:53:42.807050 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-28 00:53:42.807060 | orchestrator | 2026-03-28 00:53:42.807070 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-28 00:53:42.807080 | orchestrator | Saturday 28 March 2026 00:51:44 +0000 (0:00:01.914) 0:00:29.659 ******** 2026-03-28 00:53:42.807110 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-28 00:53:42.807120 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-28 00:53:42.807130 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-28 00:53:42.807140 | orchestrator | 2026-03-28 00:53:42.807158 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-28 00:53:42.807169 | orchestrator | Saturday 28 March 2026 00:51:46 +0000 (0:00:01.515) 0:00:31.174 ******** 2026-03-28 00:53:42.807179 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:53:42.807189 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:53:42.807199 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:53:42.807208 | orchestrator | 2026-03-28 
00:53:42.807219 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-28 00:53:42.807229 | orchestrator | Saturday 28 March 2026 00:51:46 +0000 (0:00:00.619) 0:00:31.794 ******** 2026-03-28 00:53:42.807249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:53:42.807261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:53:42.807281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:53:42.807293 | orchestrator | 2026-03-28 00:53:42.807303 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-28 00:53:42.807313 | orchestrator | Saturday 28 March 2026 00:51:48 +0000 (0:00:01.720) 0:00:33.515 ******** 2026-03-28 00:53:42.807322 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:53:42.807332 | orchestrator | changed: [testbed-node-1] 
2026-03-28 00:53:42.807342 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:53:42.807352 | orchestrator | 2026-03-28 00:53:42.807362 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-28 00:53:42.807371 | orchestrator | Saturday 28 March 2026 00:51:49 +0000 (0:00:01.095) 0:00:34.610 ******** 2026-03-28 00:53:42.807381 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:53:42.807397 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:53:42.807407 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:53:42.807417 | orchestrator | 2026-03-28 00:53:42.807426 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-28 00:53:42.807436 | orchestrator | Saturday 28 March 2026 00:51:57 +0000 (0:00:08.054) 0:00:42.664 ******** 2026-03-28 00:53:42.807446 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:53:42.807456 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:53:42.807466 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:53:42.807476 | orchestrator | 2026-03-28 00:53:42.807485 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-28 00:53:42.807495 | orchestrator | 2026-03-28 00:53:42.807505 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-28 00:53:42.807515 | orchestrator | Saturday 28 March 2026 00:51:58 +0000 (0:00:00.749) 0:00:43.413 ******** 2026-03-28 00:53:42.807524 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:53:42.807534 | orchestrator | 2026-03-28 00:53:42.807544 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-28 00:53:42.807553 | orchestrator | Saturday 28 March 2026 00:51:59 +0000 (0:00:00.655) 0:00:44.069 ******** 2026-03-28 00:53:42.807563 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:53:42.807573 | orchestrator | 2026-03-28 
00:53:42.807583 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-28 00:53:42.807671 | orchestrator | Saturday 28 March 2026 00:51:59 +0000 (0:00:00.300) 0:00:44.370 ******** 2026-03-28 00:53:42.807696 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:53:42.807706 | orchestrator | 2026-03-28 00:53:42.807716 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-28 00:53:42.807726 | orchestrator | Saturday 28 March 2026 00:52:01 +0000 (0:00:02.387) 0:00:46.757 ******** 2026-03-28 00:53:42.807736 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:53:42.807746 | orchestrator | 2026-03-28 00:53:42.807755 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-28 00:53:42.807765 | orchestrator | 2026-03-28 00:53:42.807775 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-28 00:53:42.807784 | orchestrator | Saturday 28 March 2026 00:52:57 +0000 (0:00:55.942) 0:01:42.700 ******** 2026-03-28 00:53:42.807823 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:53:42.807835 | orchestrator | 2026-03-28 00:53:42.807845 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-28 00:53:42.807855 | orchestrator | Saturday 28 March 2026 00:52:58 +0000 (0:00:00.659) 0:01:43.359 ******** 2026-03-28 00:53:42.807865 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:53:42.807874 | orchestrator | 2026-03-28 00:53:42.807884 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-28 00:53:42.807894 | orchestrator | Saturday 28 March 2026 00:52:58 +0000 (0:00:00.352) 0:01:43.712 ******** 2026-03-28 00:53:42.807918 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:53:42.807928 | orchestrator | 2026-03-28 00:53:42.807938 | orchestrator | TASK [rabbitmq : 
Waiting for rabbitmq to start] ******************************** 2026-03-28 00:53:42.807947 | orchestrator | Saturday 28 March 2026 00:53:05 +0000 (0:00:07.055) 0:01:50.767 ******** 2026-03-28 00:53:42.807957 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:53:42.807967 | orchestrator | 2026-03-28 00:53:42.807976 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-28 00:53:42.807986 | orchestrator | 2026-03-28 00:53:42.807996 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-28 00:53:42.808006 | orchestrator | Saturday 28 March 2026 00:53:16 +0000 (0:00:10.750) 0:02:01.517 ******** 2026-03-28 00:53:42.808016 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:53:42.808025 | orchestrator | 2026-03-28 00:53:42.808035 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-28 00:53:42.808045 | orchestrator | Saturday 28 March 2026 00:53:17 +0000 (0:00:00.670) 0:02:02.188 ******** 2026-03-28 00:53:42.808062 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:53:42.808072 | orchestrator | 2026-03-28 00:53:42.808082 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-28 00:53:42.808100 | orchestrator | Saturday 28 March 2026 00:53:17 +0000 (0:00:00.384) 0:02:02.572 ******** 2026-03-28 00:53:42.808110 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:53:42.808120 | orchestrator | 2026-03-28 00:53:42.808130 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-28 00:53:42.808139 | orchestrator | Saturday 28 March 2026 00:53:24 +0000 (0:00:07.127) 0:02:09.700 ******** 2026-03-28 00:53:42.808149 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:53:42.808159 | orchestrator | 2026-03-28 00:53:42.808169 | orchestrator | PLAY [Apply rabbitmq post-configuration] 
*************************************** 2026-03-28 00:53:42.808178 | orchestrator | 2026-03-28 00:53:42.808188 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-28 00:53:42.808198 | orchestrator | Saturday 28 March 2026 00:53:36 +0000 (0:00:11.483) 0:02:21.183 ******** 2026-03-28 00:53:42.808208 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:53:42.808217 | orchestrator | 2026-03-28 00:53:42.808227 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-28 00:53:42.808237 | orchestrator | Saturday 28 March 2026 00:53:36 +0000 (0:00:00.522) 0:02:21.705 ******** 2026-03-28 00:53:42.808246 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-28 00:53:42.808256 | orchestrator | enable_outward_rabbitmq_True 2026-03-28 00:53:42.808266 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-28 00:53:42.808276 | orchestrator | outward_rabbitmq_restart 2026-03-28 00:53:42.808285 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:53:42.808295 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:53:42.808305 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:53:42.808314 | orchestrator | 2026-03-28 00:53:42.808324 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-03-28 00:53:42.808344 | orchestrator | skipping: no hosts matched 2026-03-28 00:53:42.808354 | orchestrator | 2026-03-28 00:53:42.808364 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-03-28 00:53:42.808374 | orchestrator | skipping: no hosts matched 2026-03-28 00:53:42.808383 | orchestrator | 2026-03-28 00:53:42.808399 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-03-28 00:53:42.808410 | orchestrator | skipping: no hosts matched 
2026-03-28 00:53:42.808419 | orchestrator | 2026-03-28 00:53:42.808429 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:53:42.808440 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-28 00:53:42.808450 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-28 00:53:42.808460 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:53:42.808469 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:53:42.808479 | orchestrator | 2026-03-28 00:53:42.808489 | orchestrator | 2026-03-28 00:53:42.808499 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:53:42.808508 | orchestrator | Saturday 28 March 2026 00:53:39 +0000 (0:00:02.786) 0:02:24.492 ******** 2026-03-28 00:53:42.808518 | orchestrator | =============================================================================== 2026-03-28 00:53:42.808528 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 78.18s 2026-03-28 00:53:42.808537 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 16.57s 2026-03-28 00:53:42.808553 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.06s 2026-03-28 00:53:42.808563 | orchestrator | Check RabbitMQ service -------------------------------------------------- 4.17s 2026-03-28 00:53:42.808573 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.79s 2026-03-28 00:53:42.808583 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.75s 2026-03-28 00:53:42.808592 | orchestrator | rabbitmq : Copying over rabbitmq.conf 
----------------------------------- 2.27s 2026-03-28 00:53:42.808601 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.16s 2026-03-28 00:53:42.808611 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.99s 2026-03-28 00:53:42.808620 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.94s 2026-03-28 00:53:42.808630 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.91s 2026-03-28 00:53:42.808639 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.87s 2026-03-28 00:53:42.808649 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.72s 2026-03-28 00:53:42.808659 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.68s 2026-03-28 00:53:42.808668 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.52s 2026-03-28 00:53:42.808678 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.47s 2026-03-28 00:53:42.808687 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.30s 2026-03-28 00:53:42.808696 | orchestrator | rabbitmq : Remove ha-all policy from RabbitMQ --------------------------- 1.10s 2026-03-28 00:53:42.808706 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.10s 2026-03-28 00:53:42.808716 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.09s 2026-03-28 00:53:42.808731 | orchestrator | 2026-03-28 00:53:42 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:53:42.809676 | orchestrator | 2026-03-28 00:53:42 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED 2026-03-28 00:53:42.809771 | orchestrator | 2026-03-28 00:53:42 | INFO  | 
Wait 1 second(s) until the next check 2026-03-28 00:54:43.742375 | orchestrator | 
2026-03-28 00:54:43 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:54:43.742586 | orchestrator | 2026-03-28 00:54:43 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:54:43.743512 | orchestrator | 2026-03-28 00:54:43 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED 2026-03-28 00:54:43.743622 | orchestrator | 2026-03-28 00:54:43 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:46.797571 | orchestrator | 2026-03-28 00:54:46 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:54:46.799857 | orchestrator | 2026-03-28 00:54:46 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:54:46.800621 | orchestrator | 2026-03-28 00:54:46 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state STARTED 2026-03-28 00:54:46.800668 | orchestrator | 2026-03-28 00:54:46 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:49.841090 | orchestrator | 2026-03-28 00:54:49 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:54:49.842550 | orchestrator | 2026-03-28 00:54:49 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:54:49.845839 | orchestrator | 2026-03-28 00:54:49 | INFO  | Task 8a4bc7f9-e72f-4ac0-afde-1f1a352df517 is in state SUCCESS 2026-03-28 00:54:49.848066 | orchestrator | 2026-03-28 00:54:49.848128 | orchestrator | 2026-03-28 00:54:49.848142 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 00:54:49.848155 | orchestrator | 2026-03-28 00:54:49.848166 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 00:54:49.848177 | orchestrator | Saturday 28 March 2026 00:52:08 +0000 (0:00:00.245) 0:00:00.245 ******** 2026-03-28 00:54:49.848235 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:54:49.848248 | 
orchestrator | ok: [testbed-node-4] 2026-03-28 00:54:49.848259 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:54:49.848270 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:49.848281 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:49.848292 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:49.848318 | orchestrator | 2026-03-28 00:54:49.848341 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 00:54:49.848352 | orchestrator | Saturday 28 March 2026 00:52:10 +0000 (0:00:01.491) 0:00:01.737 ******** 2026-03-28 00:54:49.848430 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-28 00:54:49.848444 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-28 00:54:49.848475 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-28 00:54:49.848497 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-28 00:54:49.848509 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-28 00:54:49.848520 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-28 00:54:49.848559 | orchestrator | 2026-03-28 00:54:49.848571 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-28 00:54:49.848582 | orchestrator | 2026-03-28 00:54:49.848593 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-28 00:54:49.848604 | orchestrator | Saturday 28 March 2026 00:52:11 +0000 (0:00:01.644) 0:00:03.382 ******** 2026-03-28 00:54:49.848617 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:54:49.848630 | orchestrator | 2026-03-28 00:54:49.848641 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-28 00:54:49.848652 | orchestrator | 
Saturday 28 March 2026 00:52:13 +0000 (0:00:01.349) 0:00:04.732 ********
2026-03-28 00:54:49.848665 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.848707 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.848727 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.848745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.848778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.848806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.848818 | orchestrator |
2026-03-28 00:54:49.848829 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-03-28 00:54:49.848840 | orchestrator | Saturday 28 March 2026 00:52:15 +0000 (0:00:01.816) 0:00:06.548 ********
2026-03-28 00:54:49.848886 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.848910 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.848921 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.848933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.848944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.848955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.848966 | orchestrator |
2026-03-28 00:54:49.848977 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-03-28 00:54:49.848988 | orchestrator | Saturday 28 March 2026 00:52:17 +0000 (0:00:02.822) 0:00:09.371 ********
2026-03-28 00:54:49.849005 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.849017 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.849036 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.849055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.849067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.849078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.849089 | orchestrator |
2026-03-28 00:54:49.849100 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-03-28 00:54:49.849111 | orchestrator | Saturday 28 March 2026 00:52:20 +0000 (0:00:02.264) 0:00:11.635 ********
2026-03-28 00:54:49.849122 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.849133 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.849144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.849161 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.849172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.849197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.849209 | orchestrator |
2026-03-28 00:54:49.849219 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2026-03-28 00:54:49.849230 | orchestrator | Saturday 28 March 2026 00:52:22 +0000 (0:00:02.453) 0:00:14.089 ********
2026-03-28 00:54:49.849242 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.849253 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.849265 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.849276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.849287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.849298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:49.849309 | orchestrator |
2026-03-28 00:54:49.849320 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-03-28 00:54:49.849331 | orchestrator | Saturday 28 March 2026 00:52:25 +0000 (0:00:02.597) 0:00:16.686 ********
2026-03-28 00:54:49.849342 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:54:49.849361 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:54:49.849372 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:54:49.849389 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:54:49.849400 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:54:49.849411 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:54:49.849422 | orchestrator |
2026-03-28 00:54:49.849433 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-03-28 00:54:49.849444 | orchestrator | Saturday 28 March 2026 00:52:28 +0000 (0:00:03.409) 0:00:20.096 ********
2026-03-28 00:54:49.849455 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-03-28 00:54:49.849466 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-03-28 00:54:49.849477 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-03-28 00:54:49.849493 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-03-28 00:54:49.849504 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-03-28 00:54:49.849514 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-03-28 00:54:49.849525 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-28 00:54:49.849536 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-28 00:54:49.849547 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-28 00:54:49.849557 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-28 00:54:49.849568 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-28 00:54:49.849579 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-28 00:54:49.849590 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-28 00:54:49.849602 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-28 00:54:49.849613 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-28 00:54:49.849624 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-28 00:54:49.849635 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-28 00:54:49.849646 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-28 00:54:49.849657 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-28 00:54:49.849669 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-28 00:54:49.849768 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-28 00:54:49.849781 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-28 00:54:49.849792 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-28 00:54:49.849803 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-28 00:54:49.849813 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-28 00:54:49.849824 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-28 00:54:49.849842 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-28 00:54:49.849852 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-28 00:54:49.849862 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-28 00:54:49.849871 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-28 00:54:49.849881 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-28 00:54:49.849891 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-28 00:54:49.849900 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-28 00:54:49.849910 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-28 00:54:49.849920 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-28 00:54:49.849930 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-28 00:54:49.849945 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-28 00:54:49.849955 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-28 00:54:49.849965 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-28 00:54:49.849975 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-28 00:54:49.849991 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-28 00:54:49.850001 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-28 00:54:49.850011 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-03-28 00:54:49.850077 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-03-28 00:54:49.850088 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-03-28 00:54:49.850097 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-03-28 00:54:49.850107 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-03-28 00:54:49.850117 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-03-28 00:54:49.850126 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-28 00:54:49.850136 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-28 00:54:49.850146 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-28 00:54:49.850156 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-28 00:54:49.850165 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-28 00:54:49.850175 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-28 00:54:49.850195 | orchestrator |
2026-03-28 00:54:49.850205 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-28 00:54:49.850215 | orchestrator | Saturday 28 March 2026 00:52:49 +0000 (0:00:20.838) 0:00:40.934 ********
2026-03-28 00:54:49.850225 | orchestrator |
2026-03-28 00:54:49.850235 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-28 00:54:49.850244 | orchestrator | Saturday 28 March 2026 00:52:49 +0000 (0:00:00.085) 0:00:41.019 ********
2026-03-28 00:54:49.850254 | orchestrator |
2026-03-28 00:54:49.850264 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-28 00:54:49.850273 | orchestrator | Saturday 28 March 2026 00:52:49 +0000 (0:00:00.071) 0:00:41.091 ********
2026-03-28 00:54:49.850283 | orchestrator |
2026-03-28 00:54:49.850292 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-28 00:54:49.850302 | orchestrator | Saturday 28 March 2026 00:52:49 +0000 (0:00:00.077) 0:00:41.168 ********
2026-03-28 00:54:49.850311 | orchestrator |
2026-03-28 00:54:49.850321 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-28 00:54:49.850330 | orchestrator | Saturday 28 March 2026 00:52:49 +0000 (0:00:00.077) 0:00:41.246 ********
2026-03-28 00:54:49.850340 | orchestrator |
2026-03-28 00:54:49.850349 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-28 00:54:49.850359 | orchestrator | Saturday 28 March 2026 00:52:49 +0000 (0:00:00.078) 0:00:41.325 ********
2026-03-28 00:54:49.850368 | orchestrator |
2026-03-28 00:54:49.850378 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-03-28 00:54:49.850387 | orchestrator | Saturday 28 March 2026 00:52:49 +0000 (0:00:00.096) 0:00:41.422 ********
2026-03-28 00:54:49.850397 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:54:49.850407 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:54:49.850417 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:49.850426 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:49.850436 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:54:49.850445 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:49.850455 | orchestrator |
2026-03-28 00:54:49.850465 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-03-28 00:54:49.850474 | orchestrator | Saturday 28 March 2026 00:52:51 +0000 (0:00:01.938) 0:00:43.360 ********
2026-03-28 00:54:49.850484 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:54:49.850493 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:54:49.850503 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:54:49.850513 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:54:49.850522 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:54:49.850532 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:54:49.850541 | orchestrator |
2026-03-28 00:54:49.850555 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-03-28 00:54:49.850565 | orchestrator |
2026-03-28 00:54:49.850575 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-28 00:54:49.850585 | orchestrator | Saturday 28 March 2026 00:53:25 +0000 (0:00:33.501) 0:01:16.862 ********
2026-03-28 00:54:49.850594 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:54:49.850604 | orchestrator |
2026-03-28 00:54:49.850613 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-28 00:54:49.850623 | orchestrator | Saturday 28 March 2026 00:53:26 +0000 (0:00:00.802) 0:01:17.664 ********
2026-03-28 00:54:49.850632 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:54:49.850642 | orchestrator |
2026-03-28 00:54:49.850658 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-03-28 00:54:49.850668 | orchestrator | Saturday 28 March 2026 00:53:26 +0000 (0:00:00.632) 0:01:18.297 ********
2026-03-28 00:54:49.850704 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:49.850729 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:49.850739 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:49.850749 | orchestrator |
2026-03-28 00:54:49.850758 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-03-28 00:54:49.850768 | orchestrator | Saturday 28 March 2026 00:53:27 +0000 (0:00:01.054) 0:01:19.352 ********
2026-03-28 00:54:49.850778 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:49.850787 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:49.850797 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:49.850806 | orchestrator |
2026-03-28 00:54:49.850816 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-03-28 00:54:49.850826 | orchestrator | Saturday 28 March 2026 00:53:28 +0000 (0:00:00.359) 0:01:19.712 ********
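Aside: the `ovn-remote` value written by the "Configure OVN in OVSDB" task above is simply the three ovn-db hosts joined as TCP endpoints on the OVN southbound port. A minimal sketch of that composition (host IPs and port 6642 taken from the log; the `ovs-vsctl` invocation at the end is commented out because it assumes a running Open vSwitch, and the variable names are illustrative, not from the playbooks):

```shell
# Hosts running the OVN southbound DB in this testbed (from the log above).
OVN_DB_HOSTS="192.168.16.10 192.168.16.11 192.168.16.12"
OVN_SB_PORT=6642

ovn_remote=""
for host in $OVN_DB_HOSTS; do
    # Append "tcp:<host>:<port>", comma-separated after the first entry.
    ovn_remote="${ovn_remote:+$ovn_remote,}tcp:${host}:${OVN_SB_PORT}"
done
echo "$ovn_remote"

# On a compute node the value would then be applied roughly like this
# (left commented out; requires Open vSwitch):
#   ovs-vsctl set open_vswitch . external-ids:ovn-remote="$ovn_remote"
```

Running this prints the same connection string that appears in each node's `ovn-remote` item in the task output.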
2026-03-28 00:54:49.850835 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:49.850845 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:49.850854 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:49.850863 | orchestrator |
2026-03-28 00:54:49.850873 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-03-28 00:54:49.850883 | orchestrator | Saturday 28 March 2026 00:53:28 +0000 (0:00:00.387) 0:01:20.099 ********
2026-03-28 00:54:49.850892 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:49.850902 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:49.850911 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:49.850921 | orchestrator |
2026-03-28 00:54:49.850930 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-03-28 00:54:49.850940 | orchestrator | Saturday 28 March 2026 00:53:29 +0000 (0:00:00.445) 0:01:20.545 ********
2026-03-28 00:54:49.850949 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:49.850958 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:49.850968 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:49.850977 | orchestrator |
2026-03-28 00:54:49.850987 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-03-28 00:54:49.850996 | orchestrator | Saturday 28 March 2026 00:53:29 +0000 (0:00:00.623) 0:01:21.168 ********
2026-03-28 00:54:49.851006 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:49.851016 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:49.851025 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:49.851034 | orchestrator |
2026-03-28 00:54:49.851044 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-03-28 00:54:49.851053 | orchestrator | Saturday 28 March 2026 00:53:30 +0000 (0:00:00.297) 0:01:21.466 ********
2026-03-28 00:54:49.851063 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:49.851072 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:49.851081 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:49.851091 | orchestrator |
2026-03-28 00:54:49.851101 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-03-28 00:54:49.851110 | orchestrator | Saturday 28 March 2026 00:53:30 +0000 (0:00:00.306) 0:01:21.772 ********
2026-03-28 00:54:49.851120 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:49.851129 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:49.851139 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:49.851148 | orchestrator |
2026-03-28 00:54:49.851158 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-03-28 00:54:49.851167 | orchestrator | Saturday 28 March 2026 00:53:30 +0000 (0:00:00.330) 0:01:22.103 ********
2026-03-28 00:54:49.851177 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:49.851186 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:49.851196 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:49.851205 | orchestrator |
2026-03-28 00:54:49.851215 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-03-28 00:54:49.851225 | orchestrator | Saturday 28 March 2026 00:53:31 +0000 (0:00:00.542) 0:01:22.646 ********
2026-03-28 00:54:49.851234 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:49.851244 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:49.851253 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:49.851269 | orchestrator |
2026-03-28 00:54:49.851278 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-03-28 00:54:49.851288 | orchestrator | Saturday 28 March 2026 00:53:31 +0000 (0:00:00.297) 0:01:22.943 ********
2026-03-28 00:54:49.851297 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:49.851307 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:49.851316 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:49.851326 | orchestrator |
2026-03-28 00:54:49.851336 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-03-28 00:54:49.851345 | orchestrator | Saturday 28 March 2026 00:53:31 +0000 (0:00:00.315) 0:01:23.259 ********
2026-03-28 00:54:49.851355 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:49.851364 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:49.851374 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:49.851383 | orchestrator |
2026-03-28 00:54:49.851393 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-03-28 00:54:49.851402 | orchestrator | Saturday 28 March 2026 00:53:32 +0000 (0:00:00.325) 0:01:23.584 ********
2026-03-28 00:54:49.851412 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:49.851421 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:49.851436 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:49.851446 | orchestrator |
2026-03-28 00:54:49.851455 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-03-28 00:54:49.851465 | orchestrator | Saturday 28 March 2026 00:53:32 +0000 (0:00:00.550) 0:01:24.135 ********
2026-03-28 00:54:49.851474 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:49.851484 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:49.851493 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:49.851503 | orchestrator |
2026-03-28 00:54:49.851512 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-03-28 00:54:49.851522 | orchestrator | Saturday 28 March 2026 00:53:33 +0000 (0:00:00.354) 0:01:24.489 ********
2026-03-28 00:54:49.851531 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:49.851541 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:49.851550 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:49.851560 | orchestrator |
2026-03-28 00:54:49.851575 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-03-28 00:54:49.851585 | orchestrator | Saturday 28 March 2026 00:53:33 +0000 (0:00:00.340) 0:01:24.830 ********
2026-03-28 00:54:49.851595 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:49.851604 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:49.851614 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:49.851623 | orchestrator |
2026-03-28 00:54:49.851633 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-03-28 00:54:49.851642 | orchestrator | Saturday 28 March 2026 00:53:33 +0000 (0:00:00.308) 0:01:25.139 ********
2026-03-28 00:54:49.851652 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:49.851661 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:49.851671 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:49.851703 | orchestrator |
2026-03-28 00:54:49.851713 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-28 00:54:49.851723 | orchestrator | Saturday 28 March 2026 00:53:34 +0000 (0:00:00.557) 0:01:25.696 ********
2026-03-28 00:54:49.851733 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:54:49.851742 | orchestrator |
2026-03-28 00:54:49.851752 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-03-28 00:54:49.851762 | orchestrator | Saturday 28 March 2026 00:53:34 +0000 (0:00:00.655) 0:01:26.352 ********
2026-03-28 00:54:49.851771 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:49.851780 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:49.851790 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:49.851800 | orchestrator |
2026-03-28 00:54:49.851809 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-03-28 00:54:49.851826 | orchestrator | Saturday 28 March 2026 00:53:35 +0000 (0:00:00.668) 0:01:27.020 ********
2026-03-28 00:54:49.851836 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:49.851845 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:49.851855 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:49.851864 | orchestrator |
2026-03-28 00:54:49.851874 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-03-28 00:54:49.851884 | orchestrator | Saturday 28 March 2026 00:53:36 +0000 (0:00:00.561) 0:01:27.582 ********
2026-03-28 00:54:49.851893 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:49.851903 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:49.851912 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:49.851922 | orchestrator |
2026-03-28 00:54:49.851932 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-03-28 00:54:49.851941 | orchestrator | Saturday 28 March 2026 00:53:36 +0000 (0:00:00.611) 0:01:28.194 ********
2026-03-28 00:54:49.851951 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:49.851960 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:49.851970 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:49.851979 | orchestrator |
2026-03-28 00:54:49.851989 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-03-28 00:54:49.851998 | orchestrator | Saturday 28 March 2026 00:53:37 +0000 (0:00:00.413) 0:01:28.607 ********
2026-03-28 00:54:49.852008 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:49.852017 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:49.852027 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:54:49.852036 | orchestrator | 2026-03-28 00:54:49.852046 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-28 00:54:49.852056 | orchestrator | Saturday 28 March 2026 00:53:37 +0000 (0:00:00.458) 0:01:29.066 ******** 2026-03-28 00:54:49.852065 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:54:49.852075 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:54:49.852084 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:54:49.852094 | orchestrator | 2026-03-28 00:54:49.852103 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-28 00:54:49.852113 | orchestrator | Saturday 28 March 2026 00:53:37 +0000 (0:00:00.371) 0:01:29.437 ******** 2026-03-28 00:54:49.852123 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:54:49.852132 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:54:49.852141 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:54:49.852151 | orchestrator | 2026-03-28 00:54:49.852160 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-28 00:54:49.852170 | orchestrator | Saturday 28 March 2026 00:53:38 +0000 (0:00:00.587) 0:01:30.025 ******** 2026-03-28 00:54:49.852180 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:54:49.852189 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:54:49.852199 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:54:49.852208 | orchestrator | 2026-03-28 00:54:49.852218 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-28 00:54:49.852227 | orchestrator | Saturday 28 March 2026 00:53:38 +0000 (0:00:00.322) 0:01:30.347 ******** 2026-03-28 00:54:49.852242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852360 | orchestrator | 2026-03-28 00:54:49.852370 | orchestrator | TASK [ovn-db : Copying over 
config.json files for services] ******************** 2026-03-28 00:54:49.852380 | orchestrator | Saturday 28 March 2026 00:53:40 +0000 (0:00:01.613) 0:01:31.961 ******** 2026-03-28 00:54:49.852390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852496 | orchestrator | 2026-03-28 00:54:49.852505 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-28 00:54:49.852515 | orchestrator | Saturday 28 March 2026 00:53:44 +0000 (0:00:04.189) 0:01:36.151 ******** 2026-03-28 00:54:49.852525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.852632 | orchestrator | 2026-03-28 00:54:49.852642 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-28 00:54:49.852652 | orchestrator | Saturday 28 March 2026 00:53:48 +0000 (0:00:03.398) 0:01:39.550 ******** 2026-03-28 00:54:49.852661 | orchestrator | 2026-03-28 00:54:49.852671 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-28 00:54:49.852706 | orchestrator | Saturday 28 March 2026 00:53:48 +0000 (0:00:00.071) 0:01:39.621 ******** 2026-03-28 00:54:49.852723 | orchestrator | 2026-03-28 00:54:49.852740 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-28 00:54:49.852756 | orchestrator | Saturday 28 March 2026 00:53:48 +0000 (0:00:00.081) 0:01:39.702 ******** 2026-03-28 00:54:49.852778 | orchestrator | 2026-03-28 00:54:49.852788 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-28 00:54:49.852797 | orchestrator | Saturday 28 March 2026 00:53:48 +0000 (0:00:00.096) 0:01:39.799 ******** 2026-03-28 00:54:49.852807 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:54:49.852816 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:54:49.852826 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:54:49.852836 | orchestrator | 2026-03-28 00:54:49.852845 | orchestrator | RUNNING 
HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-28 00:54:49.852855 | orchestrator | Saturday 28 March 2026 00:53:55 +0000 (0:00:07.610) 0:01:47.409 ******** 2026-03-28 00:54:49.852865 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:54:49.852874 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:54:49.852883 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:54:49.852893 | orchestrator | 2026-03-28 00:54:49.852902 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-28 00:54:49.852912 | orchestrator | Saturday 28 March 2026 00:53:58 +0000 (0:00:02.776) 0:01:50.186 ******** 2026-03-28 00:54:49.852921 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:54:49.852931 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:54:49.852941 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:54:49.852950 | orchestrator | 2026-03-28 00:54:49.852960 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-28 00:54:49.852970 | orchestrator | Saturday 28 March 2026 00:54:07 +0000 (0:00:08.802) 0:01:58.988 ******** 2026-03-28 00:54:49.852979 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:54:49.852989 | orchestrator | 2026-03-28 00:54:49.852998 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-28 00:54:49.853008 | orchestrator | Saturday 28 March 2026 00:54:07 +0000 (0:00:00.142) 0:01:59.131 ******** 2026-03-28 00:54:49.853017 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:49.853027 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:49.853036 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:49.853046 | orchestrator | 2026-03-28 00:54:49.853062 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-28 00:54:49.853072 | orchestrator | Saturday 28 March 2026 00:54:08 +0000 (0:00:00.897) 
0:02:00.029 ******** 2026-03-28 00:54:49.853081 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:54:49.853091 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:54:49.853101 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:54:49.853111 | orchestrator | 2026-03-28 00:54:49.853120 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-28 00:54:49.853130 | orchestrator | Saturday 28 March 2026 00:54:09 +0000 (0:00:00.705) 0:02:00.735 ******** 2026-03-28 00:54:49.853139 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:49.853149 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:49.853159 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:49.853168 | orchestrator | 2026-03-28 00:54:49.853243 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-28 00:54:49.853262 | orchestrator | Saturday 28 March 2026 00:54:10 +0000 (0:00:00.966) 0:02:01.702 ******** 2026-03-28 00:54:49.853271 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:54:49.853281 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:54:49.853290 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:54:49.853300 | orchestrator | 2026-03-28 00:54:49.853309 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-28 00:54:49.853319 | orchestrator | Saturday 28 March 2026 00:54:10 +0000 (0:00:00.655) 0:02:02.357 ******** 2026-03-28 00:54:49.853328 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:49.853338 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:49.853347 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:49.853357 | orchestrator | 2026-03-28 00:54:49.853366 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-28 00:54:49.853376 | orchestrator | Saturday 28 March 2026 00:54:11 +0000 (0:00:00.884) 0:02:03.242 ******** 2026-03-28 
00:54:49.853392 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:49.853401 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:49.853411 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:49.853420 | orchestrator | 2026-03-28 00:54:49.853430 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-03-28 00:54:49.853439 | orchestrator | Saturday 28 March 2026 00:54:12 +0000 (0:00:00.837) 0:02:04.079 ******** 2026-03-28 00:54:49.853448 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:49.853458 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:49.853467 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:49.853477 | orchestrator | 2026-03-28 00:54:49.853486 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-28 00:54:49.853496 | orchestrator | Saturday 28 March 2026 00:54:12 +0000 (0:00:00.319) 0:02:04.399 ******** 2026-03-28 00:54:49.853506 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853516 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853526 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853536 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853552 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853562 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853579 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853590 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853606 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853615 | orchestrator | 2026-03-28 00:54:49.853625 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-28 00:54:49.853635 | orchestrator | Saturday 28 March 2026 00:54:14 +0000 (0:00:01.889) 0:02:06.289 ******** 2026-03-28 00:54:49.853645 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853655 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853665 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853704 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853753 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 
'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853789 | orchestrator | 2026-03-28 00:54:49.853799 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-28 00:54:49.853809 | orchestrator | Saturday 28 March 2026 00:54:19 +0000 (0:00:04.316) 0:02:10.605 ******** 2026-03-28 00:54:49.853819 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853829 | orchestrator | ok: [testbed-node-1] => 
(item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853839 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853849 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853899 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:49.853924 | orchestrator | 2026-03-28 00:54:49.853934 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-28 00:54:49.853944 | orchestrator | Saturday 28 March 2026 00:54:22 +0000 (0:00:03.091) 0:02:13.696 ******** 2026-03-28 00:54:49.853954 | orchestrator | 2026-03-28 00:54:49.853963 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-28 00:54:49.853973 | orchestrator | 
Saturday 28 March 2026 00:54:22 +0000 (0:00:00.063) 0:02:13.760 ******** 2026-03-28 00:54:49.853982 | orchestrator | 2026-03-28 00:54:49.853992 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-28 00:54:49.854001 | orchestrator | Saturday 28 March 2026 00:54:22 +0000 (0:00:00.064) 0:02:13.824 ******** 2026-03-28 00:54:49.854011 | orchestrator | 2026-03-28 00:54:49.854068 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-28 00:54:49.854088 | orchestrator | Saturday 28 March 2026 00:54:22 +0000 (0:00:00.068) 0:02:13.893 ******** 2026-03-28 00:54:49.854098 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:54:49.854108 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:54:49.854118 | orchestrator | 2026-03-28 00:54:49.854128 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-28 00:54:49.854137 | orchestrator | Saturday 28 March 2026 00:54:28 +0000 (0:00:06.171) 0:02:20.064 ******** 2026-03-28 00:54:49.854147 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:54:49.854156 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:54:49.854166 | orchestrator | 2026-03-28 00:54:49.854176 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-28 00:54:49.854186 | orchestrator | Saturday 28 March 2026 00:54:35 +0000 (0:00:06.983) 0:02:27.048 ******** 2026-03-28 00:54:49.854195 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:54:49.854205 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:54:49.854214 | orchestrator | 2026-03-28 00:54:49.854224 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-28 00:54:49.854234 | orchestrator | Saturday 28 March 2026 00:54:42 +0000 (0:00:06.459) 0:02:33.507 ******** 2026-03-28 00:54:49.854244 | orchestrator | skipping: [testbed-node-0] 
2026-03-28 00:54:49.854253 | orchestrator | 2026-03-28 00:54:49.854263 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-28 00:54:49.854273 | orchestrator | Saturday 28 March 2026 00:54:42 +0000 (0:00:00.142) 0:02:33.650 ******** 2026-03-28 00:54:49.854282 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:49.854292 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:49.854301 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:49.854311 | orchestrator | 2026-03-28 00:54:49.854320 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-28 00:54:49.854330 | orchestrator | Saturday 28 March 2026 00:54:43 +0000 (0:00:00.809) 0:02:34.459 ******** 2026-03-28 00:54:49.854340 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:54:49.854349 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:54:49.854359 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:54:49.854368 | orchestrator | 2026-03-28 00:54:49.854378 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-28 00:54:49.854396 | orchestrator | Saturday 28 March 2026 00:54:43 +0000 (0:00:00.602) 0:02:35.062 ******** 2026-03-28 00:54:49.854411 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:49.854425 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:49.854435 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:49.854444 | orchestrator | 2026-03-28 00:54:49.854455 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-28 00:54:49.854465 | orchestrator | Saturday 28 March 2026 00:54:44 +0000 (0:00:00.828) 0:02:35.890 ******** 2026-03-28 00:54:49.854474 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:54:49.854484 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:54:49.854493 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:54:49.854503 | orchestrator 
| 2026-03-28 00:54:49.854512 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-28 00:54:49.854522 | orchestrator | Saturday 28 March 2026 00:54:45 +0000 (0:00:00.629) 0:02:36.520 ******** 2026-03-28 00:54:49.854532 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:49.854541 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:49.854551 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:49.854561 | orchestrator | 2026-03-28 00:54:49.854570 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-28 00:54:49.854585 | orchestrator | Saturday 28 March 2026 00:54:45 +0000 (0:00:00.795) 0:02:37.315 ******** 2026-03-28 00:54:49.854595 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:49.854605 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:49.854614 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:49.854624 | orchestrator | 2026-03-28 00:54:49.854634 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:54:49.854643 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-28 00:54:49.854654 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-28 00:54:49.854671 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-28 00:54:49.854709 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:54:49.854720 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:54:49.854730 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:54:49.854739 | orchestrator | 2026-03-28 00:54:49.854749 | orchestrator | 2026-03-28 
00:54:49.854759 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:54:49.854769 | orchestrator | Saturday 28 March 2026 00:54:46 +0000 (0:00:00.922) 0:02:38.237 ******** 2026-03-28 00:54:49.854778 | orchestrator | =============================================================================== 2026-03-28 00:54:49.854788 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 33.50s 2026-03-28 00:54:49.854797 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.84s 2026-03-28 00:54:49.854807 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 15.26s 2026-03-28 00:54:49.854817 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.78s 2026-03-28 00:54:49.854826 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.76s 2026-03-28 00:54:49.854836 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.32s 2026-03-28 00:54:49.854845 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.19s 2026-03-28 00:54:49.854855 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.41s 2026-03-28 00:54:49.854871 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.40s 2026-03-28 00:54:49.854881 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.09s 2026-03-28 00:54:49.854890 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.82s 2026-03-28 00:54:49.854900 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.60s 2026-03-28 00:54:49.854909 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.45s 2026-03-28 00:54:49.854919 | 
orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 2.26s 2026-03-28 00:54:49.854928 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.94s 2026-03-28 00:54:49.854938 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.89s 2026-03-28 00:54:49.854947 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.82s 2026-03-28 00:54:49.854957 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.64s 2026-03-28 00:54:49.854966 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.61s 2026-03-28 00:54:49.854976 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.50s 2026-03-28 00:54:49.854985 | orchestrator | 2026-03-28 00:54:49 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:52.893207 | orchestrator | 2026-03-28 00:54:52 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:54:52.895305 | orchestrator | 2026-03-28 00:54:52 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:54:52.895406 | orchestrator | 2026-03-28 00:54:52 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:55.945951 | orchestrator | 2026-03-28 00:54:55 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:54:55.947084 | orchestrator | 2026-03-28 00:54:55 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:54:55.947119 | orchestrator | 2026-03-28 00:54:55 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:59.012390 | orchestrator | 2026-03-28 00:54:59 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state STARTED 2026-03-28 00:54:59.014502 | orchestrator | 2026-03-28 00:54:59 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 
2026-03-28 00:57:52.709706 | orchestrator | 2026-03-28 00:57:52.709890 | orchestrator | 2026-03-28 00:57:52 | INFO  | Task d3b43f4a-4515-4141-94a8-274f43485db6 is in state SUCCESS 2026-03-28 00:57:52.712669 | orchestrator | 2026-03-28 00:57:52.712733 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 00:57:52.712744 | orchestrator | 2026-03-28 00:57:52.712752 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 00:57:52.712761 | orchestrator | Saturday 28 March 2026 00:50:43 +0000 (0:00:00.680) 0:00:00.681 ******** 2026-03-28 00:57:52.712773 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:52.712788 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:52.712801 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:52.712814 | orchestrator | 2026-03-28 00:57:52.712826 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 00:57:52.712840 | orchestrator | Saturday 28 March 2026 00:50:44 +0000 (0:00:00.663) 0:00:01.344 ******** 2026-03-28 00:57:52.712855 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-03-28 00:57:52.712871 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-03-28 00:57:52.712886 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-03-28 00:57:52.712901 | orchestrator | 2026-03-28 00:57:52.712915 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-03-28 00:57:52.712928 | orchestrator | 2026-03-28 00:57:52.712937 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-28 00:57:52.712945 | orchestrator | Saturday 28 March 2026 00:50:45 +0000 (0:00:00.956) 0:00:02.301 ******** 2026-03-28 00:57:52.712953 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:52.712962 | orchestrator | 2026-03-28 00:57:52.712970 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-03-28 00:57:52.712979 | orchestrator | Saturday 28 March 2026 00:50:46 +0000 (0:00:00.892) 0:00:03.193 ******** 2026-03-28 00:57:52.712987 | orchestrator 
| ok: [testbed-node-0] 2026-03-28 00:57:52.712995 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:52.713003 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:52.713713 | orchestrator | 2026-03-28 00:57:52.713760 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-28 00:57:52.713796 | orchestrator | Saturday 28 March 2026 00:50:47 +0000 (0:00:01.292) 0:00:04.486 ******** 2026-03-28 00:57:52.713821 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:52.713834 | orchestrator | 2026-03-28 00:57:52.713849 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-03-28 00:57:52.713863 | orchestrator | Saturday 28 March 2026 00:50:48 +0000 (0:00:01.571) 0:00:06.058 ******** 2026-03-28 00:57:52.713877 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:52.713890 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:52.713904 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:52.713918 | orchestrator | 2026-03-28 00:57:52.713933 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-03-28 00:57:52.713948 | orchestrator | Saturday 28 March 2026 00:50:50 +0000 (0:00:01.259) 0:00:07.318 ******** 2026-03-28 00:57:52.713961 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-28 00:57:52.713972 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-28 00:57:52.713982 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-28 00:57:52.713990 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-28 00:57:52.713999 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-28 00:57:52.714008 | 
orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-28 00:57:52.714112 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-28 00:57:52.714135 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-28 00:57:52.714144 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-28 00:57:52.714152 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-28 00:57:52.714160 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-28 00:57:52.714167 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-28 00:57:52.714175 | orchestrator | 2026-03-28 00:57:52.714183 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-28 00:57:52.714610 | orchestrator | Saturday 28 March 2026 00:50:55 +0000 (0:00:05.679) 0:00:12.997 ******** 2026-03-28 00:57:52.714636 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-28 00:57:52.714652 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-28 00:57:52.714665 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-28 00:57:52.714678 | orchestrator | 2026-03-28 00:57:52.714690 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-28 00:57:52.714701 | orchestrator | Saturday 28 March 2026 00:50:57 +0000 (0:00:01.425) 0:00:14.423 ******** 2026-03-28 00:57:52.714713 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-28 00:57:52.714725 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-28 00:57:52.714736 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-28 
00:57:52.714748 | orchestrator | 2026-03-28 00:57:52.714759 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-28 00:57:52.714770 | orchestrator | Saturday 28 March 2026 00:50:59 +0000 (0:00:02.088) 0:00:16.512 ******** 2026-03-28 00:57:52.714781 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-28 00:57:52.714789 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.714813 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-28 00:57:52.714820 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.714846 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-28 00:57:52.714853 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.714859 | orchestrator | 2026-03-28 00:57:52.714876 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-03-28 00:57:52.714883 | orchestrator | Saturday 28 March 2026 00:51:00 +0000 (0:00:01.088) 0:00:17.600 ******** 2026-03-28 00:57:52.714893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-28 00:57:52.714912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-28 00:57:52.714920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-28 00:57:52.715132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:57:52.715162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:57:52.715179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:57:52.715196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:57:52.715206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:57:52.715220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:57:52.715472 | orchestrator | 2026-03-28 00:57:52.715486 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-28 00:57:52.715494 | orchestrator | Saturday 28 March 2026 00:51:03 +0000 (0:00:03.530) 0:00:21.131 ******** 2026-03-28 00:57:52.715501 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.715508 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.715514 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.715521 | orchestrator | 2026-03-28 00:57:52.715528 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-28 00:57:52.715535 | orchestrator | Saturday 28 March 2026 00:51:06 +0000 (0:00:02.475) 0:00:23.606 ******** 2026-03-28 00:57:52.715541 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-03-28 00:57:52.715548 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-03-28 00:57:52.715555 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-03-28 00:57:52.715562 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-03-28 
00:57:52.715568 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-03-28 00:57:52.715575 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-03-28 00:57:52.715581 | orchestrator | 2026-03-28 00:57:52.715588 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-28 00:57:52.715595 | orchestrator | Saturday 28 March 2026 00:51:10 +0000 (0:00:04.017) 0:00:27.624 ******** 2026-03-28 00:57:52.715601 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.715608 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.715615 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.715621 | orchestrator | 2026-03-28 00:57:52.715628 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-28 00:57:52.715635 | orchestrator | Saturday 28 March 2026 00:51:12 +0000 (0:00:02.166) 0:00:29.790 ******** 2026-03-28 00:57:52.715642 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:52.715649 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:52.715655 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:52.715670 | orchestrator | 2026-03-28 00:57:52.715677 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-28 00:57:52.715684 | orchestrator | Saturday 28 March 2026 00:51:16 +0000 (0:00:03.548) 0:00:33.338 ******** 2026-03-28 00:57:52.715691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 00:57:52.715709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:52.715716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:52.715729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6f2a01dd532026f62415ba555684ef608752eef5', '__omit_place_holder__6f2a01dd532026f62415ba555684ef608752eef5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-28 00:57:52.715737 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.715744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 00:57:52.715760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:52.715772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:52.715783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6f2a01dd532026f62415ba555684ef608752eef5', '__omit_place_holder__6f2a01dd532026f62415ba555684ef608752eef5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-28 00:57:52.715791 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.715798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 00:57:52.715805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:52.715815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:52.715823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6f2a01dd532026f62415ba555684ef608752eef5', '__omit_place_holder__6f2a01dd532026f62415ba555684ef608752eef5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-28 00:57:52.716046 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.716511 | orchestrator | 2026-03-28 00:57:52.716528 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-28 00:57:52.716536 | orchestrator | Saturday 28 March 2026 00:51:17 +0000 (0:00:01.651) 0:00:34.989 ******** 2026-03-28 00:57:52.716544 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-28 00:57:52.716562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-28 00:57:52.716706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 
'timeout': '30'}}}) 2026-03-28 00:57:52.716716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:57:52.716732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:52.716741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6f2a01dd532026f62415ba555684ef608752eef5', '__omit_place_holder__6f2a01dd532026f62415ba555684ef608752eef5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}}) 
 2026-03-28 00:57:52.716757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:57:52.716764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:52.716778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6f2a01dd532026f62415ba555684ef608752eef5', '__omit_place_holder__6f2a01dd532026f62415ba555684ef608752eef5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-28 
00:57:52.716786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:52.716797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:52.716804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6f2a01dd532026f62415ba555684ef608752eef5', '__omit_place_holder__6f2a01dd532026f62415ba555684ef608752eef5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-28 00:57:52.716817 | orchestrator |
2026-03-28 00:57:52.716824 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-03-28 00:57:52.716831 | orchestrator | Saturday 28 March 2026 00:51:20 +0000 (0:00:03.049) 0:00:38.039 ********
2026-03-28 00:57:52.716838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-28 00:57:52.716845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-28 00:57:52.716859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-28 00:57:52.716867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:52.716874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:52.716881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:52.716922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:52.716937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:52.716964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:52.716976 | orchestrator |
2026-03-28 00:57:52.716987 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-03-28 00:57:52.716998 | orchestrator | Saturday 28 March 2026 00:51:24 +0000 (0:00:03.412) 0:00:41.451 ********
2026-03-28 00:57:52.717009 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-28 00:57:52.717619 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-28 00:57:52.717643 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-28 00:57:52.717655 | orchestrator |
2026-03-28 00:57:52.717667 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-03-28 00:57:52.717679 | orchestrator | Saturday 28 March 2026 00:51:26 +0000 (0:00:02.557) 0:00:44.008 ********
2026-03-28 00:57:52.717691 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-28 00:57:52.717703 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-28 00:57:52.717714 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-28 00:57:52.717726 | orchestrator |
2026-03-28 00:57:52.717737 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-03-28 00:57:52.717749 | orchestrator | Saturday 28 March 2026 00:51:32 +0000 (0:00:05.992) 0:00:50.000 ********
2026-03-28 00:57:52.717760 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:52.717772 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:52.717783 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:52.717795 | orchestrator |
2026-03-28 00:57:52.717806 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-03-28 00:57:52.717820 | orchestrator | Saturday 28 March 2026 00:51:33 +0000 (0:00:00.744) 0:00:50.745 ********
2026-03-28 00:57:52.717851 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-28 00:57:52.717866 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-28 00:57:52.717877 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-28 00:57:52.717888 | orchestrator |
2026-03-28 00:57:52.717919 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-03-28 00:57:52.717933 | orchestrator | Saturday 28 March 2026 00:51:36 +0000 (0:00:03.184) 0:00:53.929 ********
2026-03-28 00:57:52.717944 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-28 00:57:52.717955 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-28 00:57:52.718417 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-28 00:57:52.718491 | orchestrator |
2026-03-28 00:57:52.718502 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-03-28 00:57:52.718512 | orchestrator | Saturday 28 March 2026 00:51:39 +0000 (0:00:02.952) 0:00:56.882 ********
2026-03-28 00:57:52.718522 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2026-03-28 00:57:52.718532 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2026-03-28 00:57:52.718543 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2026-03-28 00:57:52.718552 | orchestrator |
2026-03-28 00:57:52.718562 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-03-28 00:57:52.718572 | orchestrator | Saturday 28 March 2026 00:51:41 +0000 (0:00:01.903) 0:00:58.786 ********
2026-03-28 00:57:52.718582 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2026-03-28 00:57:52.718592 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2026-03-28 00:57:52.718601 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2026-03-28 00:57:52.718610 | orchestrator |
2026-03-28 00:57:52.718620 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-28 00:57:52.718629 | orchestrator | Saturday 28 March 2026 00:51:43 +0000 (0:00:01.752) 0:01:00.539 ********
2026-03-28 00:57:52.718639 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:57:52.718648 | orchestrator |
2026-03-28 00:57:52.718657 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2026-03-28 00:57:52.718667 | orchestrator | Saturday 28 March 2026 00:51:44 +0000 (0:00:00.955) 0:01:01.495 ********
2026-03-28 00:57:52.718678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-28 00:57:52.718703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-28 00:57:52.718728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-28 00:57:52.718746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:52.718756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:52.718766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:52.718777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:52.718788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:52.718806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:52.718823 | orchestrator |
2026-03-28 00:57:52.718834 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2026-03-28 00:57:52.718844 | orchestrator | Saturday 28 March 2026 00:51:47 +0000 (0:00:03.573) 0:01:05.069 ********
2026-03-28 00:57:52.718856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-28 00:57:52.718867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:52.718874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:52.718880 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:52.718887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-28 00:57:52.718894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:52.718923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:52.718930 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:52.719114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-28 00:57:52.719665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:52.719684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:52.719695 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:52.719705 | orchestrator |
2026-03-28 00:57:52.719716 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2026-03-28 00:57:52.719727 | orchestrator | Saturday 28 March 2026 00:51:48 +0000 (0:00:00.789) 0:01:05.858 ********
2026-03-28 00:57:52.719739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-28 00:57:52.719751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:52.719785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:52.719797 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:52.719808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-28 00:57:52.719821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:52.719839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:52.719851 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:52.719863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-28 00:57:52.719884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:52.719892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:52.719905 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:52.719912 | orchestrator |
2026-03-28 00:57:52.719918 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-03-28 00:57:52.719924 | orchestrator | Saturday 28 March 2026 00:51:49 +0000 (0:00:01.280) 0:01:07.139 ********
2026-03-28 00:57:52.719938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-28 00:57:52.719945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:52.719955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:52.719962 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:52.720547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-28 00:57:52.720563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:52.720584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-28 00:57:52.720609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:52.720621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:52.720632 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:52.720644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:52.720655 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:52.720665 | orchestrator |
2026-03-28 00:57:52.720676 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-03-28 00:57:52.720695 | orchestrator | Saturday 28 March 2026 00:51:51 +0000 (0:00:01.152) 0:01:08.291 ********
2026-03-28 00:57:52.720706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-28 00:57:52.720717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:52.720735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:52.720745 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:52.720756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-28 00:57:52.720774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:52.720785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:52.720806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 00:57:52.720819 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.720829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:52.720847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:52.720857 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.720867 | orchestrator | 2026-03-28 00:57:52.720878 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-28 00:57:52.720889 | orchestrator | Saturday 28 March 2026 00:51:52 +0000 (0:00:00.862) 0:01:09.153 ******** 2026-03-28 00:57:52.720899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 00:57:52.720918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:52.720930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:52.720951 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.720967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 00:57:52.720980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:52.720999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:52.721489 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.721505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 00:57:52.721527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:52.721539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:52.721551 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.721562 | orchestrator | 2026-03-28 00:57:52.721573 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-28 00:57:52.721583 | orchestrator | Saturday 28 March 2026 00:51:53 +0000 (0:00:01.065) 0:01:10.219 ******** 2026-03-28 00:57:52.721594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 00:57:52.721614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:52.721634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:52.721646 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.721656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 00:57:52.721665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:52.721685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:52.721695 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.721705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 00:57:52.721722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:52.721739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:52.721748 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.721759 | orchestrator | 2026-03-28 00:57:52.721768 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-28 00:57:52.721777 | orchestrator | Saturday 28 March 2026 00:51:55 +0000 (0:00:02.058) 0:01:12.277 ******** 2026-03-28 00:57:52.721787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 00:57:52.721798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:52.721816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:52.721825 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.721834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 00:57:52.721850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:52.721877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:52.721888 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.721897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 00:57:52.721906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:52.721916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:52.721925 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.721933 | orchestrator | 2026-03-28 00:57:52.721943 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-28 00:57:52.722562 | orchestrator | Saturday 28 March 2026 00:51:56 +0000 (0:00:01.334) 0:01:13.611 ******** 2026-03-28 00:57:52.722578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 00:57:52.722590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:52.722618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:52.722627 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.722637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 00:57:52.722646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:52.722655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:52.722664 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.722680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 00:57:52.722690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:52.722706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:52.722719 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.722729 | orchestrator | 2026-03-28 00:57:52.722739 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-28 00:57:52.722747 | orchestrator | Saturday 28 March 2026 00:51:57 +0000 (0:00:00.944) 0:01:14.556 ******** 2026-03-28 00:57:52.722756 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-28 00:57:52.722768 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-28 00:57:52.722778 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-28 00:57:52.722787 | orchestrator | 2026-03-28 00:57:52.722795 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-28 00:57:52.722803 | orchestrator | Saturday 28 March 2026 00:51:59 +0000 (0:00:02.321) 0:01:16.877 ******** 2026-03-28 00:57:52.722812 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-28 00:57:52.722823 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-28 00:57:52.722832 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-28 00:57:52.722841 | orchestrator | 2026-03-28 00:57:52.722849 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-28 00:57:52.722859 | orchestrator | Saturday 28 March 2026 00:52:01 +0000 (0:00:01.882) 0:01:18.760 ******** 2026-03-28 00:57:52.722869 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-28 00:57:52.722878 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-28 00:57:52.722886 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-28 00:57:52.722895 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-28 00:57:52.722904 | 
orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.722914 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-28 00:57:52.722923 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.722934 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-28 00:57:52.722943 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.722952 | orchestrator | 2026-03-28 00:57:52.722960 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-28 00:57:52.722969 | orchestrator | Saturday 28 March 2026 00:52:03 +0000 (0:00:01.541) 0:01:20.301 ******** 2026-03-28 00:57:52.722985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-28 00:57:52.723001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-28 00:57:52.723010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-28 00:57:52.723027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:57:52.723036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:57:52.723045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:57:52.723054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:57:52.723078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:57:52.723089 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:57:52.723098 | orchestrator | 2026-03-28 00:57:52.723106 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-28 00:57:52.723114 | orchestrator | Saturday 28 March 2026 00:52:06 +0000 (0:00:03.375) 0:01:23.677 ******** 2026-03-28 00:57:52.723122 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:52.723131 | orchestrator | 2026-03-28 00:57:52.723140 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-28 00:57:52.723149 | orchestrator | Saturday 28 March 2026 00:52:07 +0000 (0:00:00.792) 0:01:24.469 ******** 2026-03-28 00:57:52.723164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 00:57:52.723174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 00:57:52.723184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 00:57:52.723207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.723218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.723228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 00:57:52.723241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.723251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.723261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 00:57:52.723276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 00:57:52.723291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.723301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.723311 | orchestrator | 2026-03-28 00:57:52.723321 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-28 00:57:52.723330 | orchestrator | Saturday 28 March 2026 00:52:13 +0000 (0:00:05.874) 0:01:30.344 ******** 2026-03-28 00:57:52.723345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': 
{'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-28 00:57:52.723354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 00:57:52.723364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.723435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.723446 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.723460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-28 00:57:52.723471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 00:57:52.723482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.723492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.723502 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.723536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-28 00:57:52.723553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 00:57:52.723568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.723579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 
'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.723589 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.723598 | orchestrator | 2026-03-28 00:57:52.723607 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-28 00:57:52.723615 | orchestrator | Saturday 28 March 2026 00:52:14 +0000 (0:00:01.722) 0:01:32.066 ******** 2026-03-28 00:57:52.723625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-28 00:57:52.723641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-28 00:57:52.723651 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.723661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-28 00:57:52.723670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-28 00:57:52.723678 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.723688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-28 00:57:52.723704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-28 00:57:52.723713 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.723723 | orchestrator | 2026-03-28 00:57:52.723732 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-28 00:57:52.723740 | orchestrator | Saturday 28 March 2026 00:52:17 +0000 (0:00:02.160) 0:01:34.227 ******** 2026-03-28 00:57:52.723749 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.723758 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.723766 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.723776 | orchestrator | 2026-03-28 00:57:52.723784 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-28 00:57:52.723793 | orchestrator | Saturday 28 March 2026 00:52:19 +0000 (0:00:01.934) 0:01:36.162 ******** 2026-03-28 00:57:52.723802 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.723811 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.723820 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.723830 | orchestrator | 2026-03-28 00:57:52.723838 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-28 00:57:52.723848 | orchestrator | Saturday 28 March 2026 00:52:23 +0000 (0:00:04.743) 0:01:40.906 ******** 2026-03-28 00:57:52.723858 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:52.723867 | orchestrator | 2026-03-28 00:57:52.723877 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-28 
00:57:52.723885 | orchestrator | Saturday 28 March 2026 00:52:24 +0000 (0:00:01.176) 0:01:42.082 ******** 2026-03-28 00:57:52.723904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:52.723916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:52.723930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.723949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.723960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.723970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.723985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:52.723995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.724008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.724024 | orchestrator | 2026-03-28 00:57:52.724031 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-28 00:57:52.724039 | orchestrator | Saturday 28 March 2026 00:52:31 +0000 (0:00:06.848) 0:01:48.930 ******** 2026-03-28 00:57:52.724047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 00:57:52.724056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 00:57:52.724071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.724079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.724095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.724104 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.724112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.724120 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.724129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 00:57:52.724142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.724151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.724160 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.724169 | orchestrator | 2026-03-28 00:57:52.724178 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-28 00:57:52.724186 | orchestrator | Saturday 28 March 2026 00:52:32 +0000 (0:00:00.643) 0:01:49.574 ******** 2026-03-28 00:57:52.724196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-28 00:57:52.724211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-28 00:57:52.724220 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.724232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-28 00:57:52.724241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  
2026-03-28 00:57:52.724248 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.724256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-28 00:57:52.724265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-28 00:57:52.724274 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.724282 | orchestrator | 2026-03-28 00:57:52.724289 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-28 00:57:52.724298 | orchestrator | Saturday 28 March 2026 00:52:33 +0000 (0:00:01.519) 0:01:51.093 ******** 2026-03-28 00:57:52.724306 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.724314 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.724323 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.724331 | orchestrator | 2026-03-28 00:57:52.724339 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-28 00:57:52.724347 | orchestrator | Saturday 28 March 2026 00:52:35 +0000 (0:00:01.391) 0:01:52.484 ******** 2026-03-28 00:57:52.724356 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.724364 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.724395 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.724403 | orchestrator | 2026-03-28 00:57:52.724411 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-28 00:57:52.724419 | orchestrator | Saturday 28 March 2026 00:52:37 +0000 (0:00:02.298) 0:01:54.783 ******** 2026-03-28 00:57:52.724426 | orchestrator | skipping: 
[testbed-node-0] 2026-03-28 00:57:52.724434 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.724442 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.724451 | orchestrator | 2026-03-28 00:57:52.724459 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-28 00:57:52.724468 | orchestrator | Saturday 28 March 2026 00:52:38 +0000 (0:00:00.360) 0:01:55.144 ******** 2026-03-28 00:57:52.724477 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:52.724484 | orchestrator | 2026-03-28 00:57:52.724494 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-28 00:57:52.724503 | orchestrator | Saturday 28 March 2026 00:52:39 +0000 (0:00:01.062) 0:01:56.206 ******** 2026-03-28 00:57:52.724530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-28 00:57:52.724548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-28 00:57:52.724561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-28 00:57:52.724570 | orchestrator | 2026-03-28 00:57:52.724578 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-28 00:57:52.724586 | orchestrator | Saturday 28 March 2026 00:52:43 +0000 (0:00:04.127) 0:02:00.334 ******** 2026-03-28 00:57:52.724594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-28 00:57:52.724603 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.724612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-28 00:57:52.724631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-28 00:57:52.724640 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.724648 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.724656 | orchestrator | 2026-03-28 00:57:52.724664 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-28 00:57:52.724672 | orchestrator | Saturday 28 March 2026 00:52:45 +0000 (0:00:02.661) 0:02:02.996 ******** 2026-03-28 00:57:52.724687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-28 00:57:52.724703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-28 00:57:52.724714 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.724722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-28 00:57:52.724731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-28 00:57:52.724793 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.724829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-28 00:57:52.724838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-28 00:57:52.724853 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.724860 | orchestrator | 2026-03-28 00:57:52.724868 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 
2026-03-28 00:57:52.724877 | orchestrator | Saturday 28 March 2026 00:52:48 +0000 (0:00:02.617) 0:02:05.613 ******** 2026-03-28 00:57:52.724884 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.724892 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.724902 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.724909 | orchestrator | 2026-03-28 00:57:52.724917 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-28 00:57:52.724925 | orchestrator | Saturday 28 March 2026 00:52:49 +0000 (0:00:00.903) 0:02:06.517 ******** 2026-03-28 00:57:52.724933 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.724942 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.724951 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.724959 | orchestrator | 2026-03-28 00:57:52.724967 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-28 00:57:52.724981 | orchestrator | Saturday 28 March 2026 00:52:51 +0000 (0:00:01.911) 0:02:08.429 ******** 2026-03-28 00:57:52.724988 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:52.724995 | orchestrator | 2026-03-28 00:57:52.725002 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-28 00:57:52.725009 | orchestrator | Saturday 28 March 2026 00:52:52 +0000 (0:00:00.781) 0:02:09.210 ******** 2026-03-28 00:57:52.725017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:52.725031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:52.725073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:52.725115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725144 | orchestrator | 2026-03-28 00:57:52.725151 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-28 00:57:52.725159 | orchestrator | Saturday 28 March 2026 00:52:57 +0000 (0:00:05.254) 0:02:14.464 ******** 2026-03-28 00:57:52.725170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 00:57:52.725178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725217 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.725320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 00:57:52.725337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725398 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.725407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 00:57:52.725423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725460 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.725469 | orchestrator | 2026-03-28 00:57:52.725477 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-28 00:57:52.725486 | orchestrator | Saturday 28 March 2026 00:52:58 +0000 (0:00:01.397) 0:02:15.862 ******** 2026-03-28 00:57:52.725496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-28 00:57:52.725506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-28 00:57:52.725514 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.725523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-28 00:57:52.725531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-28 
00:57:52.725539 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.725547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-28 00:57:52.725557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-28 00:57:52.725565 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.725574 | orchestrator | 2026-03-28 00:57:52.725582 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-28 00:57:52.725590 | orchestrator | Saturday 28 March 2026 00:52:59 +0000 (0:00:01.133) 0:02:16.995 ******** 2026-03-28 00:57:52.725611 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.725620 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.725629 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.725636 | orchestrator | 2026-03-28 00:57:52.725643 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-28 00:57:52.725651 | orchestrator | Saturday 28 March 2026 00:53:01 +0000 (0:00:01.407) 0:02:18.402 ******** 2026-03-28 00:57:52.725659 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.725667 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.725675 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.725684 | orchestrator | 2026-03-28 00:57:52.725695 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-28 00:57:52.725703 | orchestrator | Saturday 28 March 2026 00:53:03 +0000 (0:00:02.144) 0:02:20.547 ******** 2026-03-28 00:57:52.725711 | orchestrator | skipping: [testbed-node-0] 2026-03-28 
00:57:52.725719 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.725728 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.725736 | orchestrator | 2026-03-28 00:57:52.725743 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-28 00:57:52.725751 | orchestrator | Saturday 28 March 2026 00:53:03 +0000 (0:00:00.542) 0:02:21.089 ******** 2026-03-28 00:57:52.725759 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.725767 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.725775 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.725784 | orchestrator | 2026-03-28 00:57:52.725792 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-28 00:57:52.725801 | orchestrator | Saturday 28 March 2026 00:53:04 +0000 (0:00:00.423) 0:02:21.512 ******** 2026-03-28 00:57:52.725811 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:52.725825 | orchestrator | 2026-03-28 00:57:52.725832 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-28 00:57:52.725841 | orchestrator | Saturday 28 March 2026 00:53:05 +0000 (0:00:00.798) 0:02:22.311 ******** 2026-03-28 00:57:52.725853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 00:57:52.725864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 00:57:52.725873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 00:57:52.725945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 00:57:52.725953 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 00:57:52.725966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.725981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 
53'], 'timeout': '30'}}})  2026-03-28 00:57:52.725994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726059 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': 
{'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726135 | orchestrator | 2026-03-28 00:57:52.726144 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-28 00:57:52.726154 | 
orchestrator | Saturday 28 March 2026 00:53:09 +0000 (0:00:04.376) 0:02:26.688 ******** 2026-03-28 00:57:52.726163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 00:57:52.726176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 00:57:52.726192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 00:57:52.726260 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.726284 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 00:57:52.726294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726318 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726350 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.726365 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 00:57:52.726401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 00:57:52.726414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.726469 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.726478 | orchestrator | 2026-03-28 00:57:52.726486 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-28 00:57:52.726493 | orchestrator | Saturday 28 March 2026 00:53:10 +0000 (0:00:00.931) 0:02:27.619 ******** 2026-03-28 00:57:52.726504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-28 00:57:52.726512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-28 00:57:52.726521 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.726529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-28 00:57:52.726537 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-28 00:57:52.726545 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.726554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-28 00:57:52.726566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-28 00:57:52.726573 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.726581 | orchestrator | 2026-03-28 00:57:52.726589 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-28 00:57:52.726598 | orchestrator | Saturday 28 March 2026 00:53:11 +0000 (0:00:01.069) 0:02:28.689 ******** 2026-03-28 00:57:52.726606 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.726614 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.726622 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.726631 | orchestrator | 2026-03-28 00:57:52.726639 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-28 00:57:52.726647 | orchestrator | Saturday 28 March 2026 00:53:13 +0000 (0:00:01.991) 0:02:30.681 ******** 2026-03-28 00:57:52.726655 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.726736 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.726746 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.726754 | orchestrator | 2026-03-28 00:57:52.726763 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-28 00:57:52.726772 | 
orchestrator | Saturday 28 March 2026 00:53:15 +0000 (0:00:01.906) 0:02:32.587 ******** 2026-03-28 00:57:52.726780 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.726788 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.726798 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.726806 | orchestrator | 2026-03-28 00:57:52.726815 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-28 00:57:52.726830 | orchestrator | Saturday 28 March 2026 00:53:16 +0000 (0:00:00.591) 0:02:33.179 ******** 2026-03-28 00:57:52.726840 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:52.726849 | orchestrator | 2026-03-28 00:57:52.726856 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-28 00:57:52.726864 | orchestrator | Saturday 28 March 2026 00:53:16 +0000 (0:00:00.866) 0:02:34.045 ******** 2026-03-28 00:57:52.726885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 00:57:52.726897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 00:57:52.726913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 00:57:52.726954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 00:57:52.726966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 
2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 00:57:52.726988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
''], 'tls_backend': 'yes'}}}})  2026-03-28 00:57:52.726998 | orchestrator | 2026-03-28 00:57:52.727007 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-28 00:57:52.727015 | orchestrator | Saturday 28 March 2026 00:53:21 +0000 (0:00:04.737) 0:02:38.783 ******** 2026-03-28 00:57:52.727028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 00:57:52.727048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}}}})  2026-03-28 00:57:52.727058 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.727074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 00:57:52.727095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 00:57:52.727105 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.727122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 00:57:52.727144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 00:57:52.727154 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.727162 | orchestrator | 2026-03-28 00:57:52.727170 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-28 00:57:52.727179 | orchestrator | Saturday 28 March 2026 00:53:24 +0000 (0:00:03.291) 0:02:42.074 ******** 2026-03-28 00:57:52.727188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 00:57:52.727197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 00:57:52.727206 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.727221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 00:57:52.727236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 
2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 00:57:52.727245 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.727253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 00:57:52.727263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 00:57:52.727272 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.727281 | orchestrator | 2026-03-28 00:57:52.727289 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-28 00:57:52.727298 | orchestrator | Saturday 28 March 2026 00:53:28 +0000 (0:00:03.552) 0:02:45.627 ******** 2026-03-28 00:57:52.727307 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.727315 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.727323 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.727331 | orchestrator | 2026-03-28 00:57:52.727340 | orchestrator | 
TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-28 00:57:52.727349 | orchestrator | Saturday 28 March 2026 00:53:29 +0000 (0:00:01.435) 0:02:47.062 ******** 2026-03-28 00:57:52.727357 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.727364 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.727428 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.727436 | orchestrator | 2026-03-28 00:57:52.727452 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-28 00:57:52.727461 | orchestrator | Saturday 28 March 2026 00:53:32 +0000 (0:00:02.258) 0:02:49.321 ******** 2026-03-28 00:57:52.727469 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.727477 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.727488 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.727496 | orchestrator | 2026-03-28 00:57:52.727504 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-28 00:57:52.727512 | orchestrator | Saturday 28 March 2026 00:53:32 +0000 (0:00:00.583) 0:02:49.905 ******** 2026-03-28 00:57:52.727521 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:52.727529 | orchestrator | 2026-03-28 00:57:52.727538 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-28 00:57:52.727547 | orchestrator | Saturday 28 March 2026 00:53:33 +0000 (0:00:00.970) 0:02:50.875 ******** 2026-03-28 00:57:52.727565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 00:57:52.727581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 00:57:52.727591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 00:57:52.727600 | orchestrator | 2026-03-28 00:57:52.727608 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-28 00:57:52.727616 | orchestrator | 
Saturday 28 March 2026 00:53:37 +0000 (0:00:03.418) 0:02:54.293 ******** 2026-03-28 00:57:52.727627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 00:57:52.727641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 00:57:52.727648 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.727655 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.727664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 00:57:52.727679 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.727687 | orchestrator | 2026-03-28 00:57:52.727697 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-28 00:57:52.727706 | orchestrator | Saturday 28 March 2026 00:53:37 +0000 (0:00:00.816) 0:02:55.110 ******** 2026-03-28 00:57:52.727714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-28 00:57:52.727727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-28 00:57:52.727735 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.727744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-28 00:57:52.727751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-28 00:57:52.727760 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.727769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-28 00:57:52.727778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-28 00:57:52.727786 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.727794 | orchestrator | 2026-03-28 00:57:52.727802 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-28 00:57:52.727812 | orchestrator | Saturday 28 March 2026 00:53:38 +0000 (0:00:00.677) 0:02:55.787 ******** 2026-03-28 00:57:52.727820 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.727828 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.727836 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.727845 | orchestrator | 2026-03-28 00:57:52.727854 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-28 00:57:52.727862 | orchestrator | Saturday 28 March 2026 00:53:40 +0000 (0:00:01.466) 0:02:57.254 ******** 2026-03-28 00:57:52.727870 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.727878 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.727886 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.727893 | orchestrator | 2026-03-28 00:57:52.727902 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-28 00:57:52.727909 | orchestrator | Saturday 28 March 2026 00:53:42 +0000 (0:00:02.337) 0:02:59.591 ******** 2026-03-28 00:57:52.727917 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.727924 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.727932 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.727939 | orchestrator | 2026-03-28 00:57:52.727945 | orchestrator | TASK [include_role : 
horizon] ************************************************** 2026-03-28 00:57:52.727952 | orchestrator | Saturday 28 March 2026 00:53:43 +0000 (0:00:00.626) 0:03:00.218 ******** 2026-03-28 00:57:52.727966 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:52.727975 | orchestrator | 2026-03-28 00:57:52.727983 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-28 00:57:52.727990 | orchestrator | Saturday 28 March 2026 00:53:44 +0000 (0:00:01.039) 0:03:01.258 ******** 2026-03-28 00:57:52.728012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 00:57:52.728022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 00:57:52.728046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 00:57:52.728056 | orchestrator | 2026-03-28 00:57:52.728064 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-28 00:57:52.728072 | orchestrator | Saturday 28 March 2026 00:53:48 +0000 (0:00:04.693) 0:03:05.952 ******** 2026-03-28 00:57:52.728086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 00:57:52.728101 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.728115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}})  2026-03-28 00:57:52.728124 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.728138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 00:57:52.728153 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.728161 | orchestrator | 2026-03-28 00:57:52.728169 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-28 00:57:52.728177 | orchestrator | Saturday 28 March 2026 00:53:50 +0000 (0:00:01.393) 0:03:07.346 ******** 2026-03-28 00:57:52.728186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-28 00:57:52.728196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 00:57:52.728211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-28 00:57:52.728220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-28 00:57:52.728228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 00:57:52.728237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 00:57:52.728256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-28 00:57:52.728264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-28 00:57:52.728272 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.728280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 00:57:52.728289 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-28 00:57:52.728296 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.728308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-28 00:57:52.728316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 00:57:52.728323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-28 00:57:52.728331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 00:57:52.728339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-28 00:57:52.728348 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.728355 | orchestrator | 2026-03-28 00:57:52.728362 | 
orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-28 00:57:52.728400 | orchestrator | Saturday 28 March 2026 00:53:51 +0000 (0:00:01.040) 0:03:08.386 ******** 2026-03-28 00:57:52.728409 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.728416 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.728424 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.728432 | orchestrator | 2026-03-28 00:57:52.728440 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-28 00:57:52.728450 | orchestrator | Saturday 28 March 2026 00:53:52 +0000 (0:00:01.366) 0:03:09.753 ******** 2026-03-28 00:57:52.728457 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.728464 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.728471 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.728479 | orchestrator | 2026-03-28 00:57:52.728487 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-28 00:57:52.728502 | orchestrator | Saturday 28 March 2026 00:53:54 +0000 (0:00:02.089) 0:03:11.843 ******** 2026-03-28 00:57:52.728510 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.728518 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.728526 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.728535 | orchestrator | 2026-03-28 00:57:52.728542 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-28 00:57:52.728551 | orchestrator | Saturday 28 March 2026 00:53:55 +0000 (0:00:00.372) 0:03:12.215 ******** 2026-03-28 00:57:52.728559 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.728567 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.728575 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.728582 | orchestrator | 2026-03-28 00:57:52.728589 | 
orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-28 00:57:52.728598 | orchestrator | Saturday 28 March 2026 00:53:55 +0000 (0:00:00.602) 0:03:12.817 ******** 2026-03-28 00:57:52.728608 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:52.728616 | orchestrator | 2026-03-28 00:57:52.728625 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-28 00:57:52.728633 | orchestrator | Saturday 28 March 2026 00:53:56 +0000 (0:00:01.238) 0:03:14.056 ******** 2026-03-28 00:57:52.728641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 00:57:52.728833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 00:57:52.728850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 00:57:52.728866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 00:57:52.728884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 00:57:52.728892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 00:57:52.728909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 00:57:52.728917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 00:57:52.728928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 00:57:52.728943 | orchestrator | 2026-03-28 00:57:52.728951 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using 
single external frontend] *** 2026-03-28 00:57:52.728958 | orchestrator | Saturday 28 March 2026 00:54:01 +0000 (0:00:04.904) 0:03:18.960 ******** 2026-03-28 00:57:52.728966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 00:57:52.728975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 00:57:52.728983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 00:57:52.728991 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.729004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 00:57:52.729020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 00:57:52.729028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 00:57:52.729035 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.729043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 00:57:52.729051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 00:57:52.729130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 00:57:52.729141 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.729150 | orchestrator | 2026-03-28 00:57:52.729157 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-28 00:57:52.729165 | orchestrator | Saturday 28 March 2026 00:54:02 +0000 (0:00:00.984) 0:03:19.944 ******** 2026-03-28 00:57:52.729199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-28 00:57:52.729215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-28 00:57:52.729224 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.729232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-28 00:57:52.729244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-28 00:57:52.729253 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.729272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-28 00:57:52.729280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-28 00:57:52.729289 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.729297 | orchestrator | 2026-03-28 00:57:52.729304 | orchestrator | TASK [proxysql-config : 
Copying over keystone ProxySQL users config] *********** 2026-03-28 00:57:52.729313 | orchestrator | Saturday 28 March 2026 00:54:03 +0000 (0:00:00.867) 0:03:20.811 ******** 2026-03-28 00:57:52.729322 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.729329 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.729336 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.729344 | orchestrator | 2026-03-28 00:57:52.729352 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-28 00:57:52.729360 | orchestrator | Saturday 28 March 2026 00:54:05 +0000 (0:00:01.419) 0:03:22.231 ******** 2026-03-28 00:57:52.729417 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.729428 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.729436 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.729445 | orchestrator | 2026-03-28 00:57:52.729454 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-28 00:57:52.729462 | orchestrator | Saturday 28 March 2026 00:54:07 +0000 (0:00:02.257) 0:03:24.488 ******** 2026-03-28 00:57:52.729471 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.729480 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.729490 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.729497 | orchestrator | 2026-03-28 00:57:52.729505 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-28 00:57:52.729511 | orchestrator | Saturday 28 March 2026 00:54:07 +0000 (0:00:00.620) 0:03:25.108 ******** 2026-03-28 00:57:52.729520 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:52.729527 | orchestrator | 2026-03-28 00:57:52.729534 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-28 00:57:52.729545 | orchestrator | Saturday 
28 March 2026 00:54:08 +0000 (0:00:01.030) 0:03:26.139 ******** 2026-03-28 00:57:52.729559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 00:57:52.729577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 00:57:52.729591 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.729600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.729608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 00:57:52.729626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.729635 | orchestrator | 2026-03-28 00:57:52.729643 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-28 00:57:52.729651 | orchestrator | Saturday 28 March 2026 00:54:13 +0000 (0:00:04.030) 0:03:30.170 ******** 2026-03-28 00:57:52.729667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 00:57:52.729676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.729684 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.729693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 00:57:52.729701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.729715 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.729727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 00:57:52.729739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.729747 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.729754 | orchestrator | 2026-03-28 00:57:52.729762 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-28 00:57:52.729769 | orchestrator | Saturday 28 March 2026 00:54:14 +0000 (0:00:01.121) 0:03:31.291 ******** 2026-03-28 00:57:52.729776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-28 00:57:52.729784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-28 00:57:52.729792 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.729799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9511', 'listen_port': '9511'}})  2026-03-28 00:57:52.729806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-28 00:57:52.729813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-28 00:57:52.729820 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.729827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-28 00:57:52.729839 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.729845 | orchestrator | 2026-03-28 00:57:52.729853 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-28 00:57:52.729860 | orchestrator | Saturday 28 March 2026 00:54:15 +0000 (0:00:01.143) 0:03:32.435 ******** 2026-03-28 00:57:52.729868 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.729875 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.729882 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.729888 | orchestrator | 2026-03-28 00:57:52.729895 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-28 00:57:52.729902 | orchestrator | Saturday 28 March 2026 00:54:16 +0000 (0:00:01.555) 0:03:33.991 ******** 2026-03-28 00:57:52.729909 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.729916 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.729922 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.729930 | orchestrator | 2026-03-28 00:57:52.729937 | orchestrator | TASK 
[include_role : manila] *************************************************** 2026-03-28 00:57:52.729943 | orchestrator | Saturday 28 March 2026 00:54:19 +0000 (0:00:02.473) 0:03:36.465 ******** 2026-03-28 00:57:52.729951 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:52.729958 | orchestrator | 2026-03-28 00:57:52.729965 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-28 00:57:52.729973 | orchestrator | Saturday 28 March 2026 00:54:21 +0000 (0:00:01.821) 0:03:38.287 ******** 2026-03-28 00:57:52.729986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 00:57:52.729995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.730007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.730049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.730065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 00:57:52.730080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.730088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.730096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.730109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 00:57:52.730122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.730130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.730142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.730150 | orchestrator | 2026-03-28 00:57:52.730157 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-28 00:57:52.730164 | orchestrator | Saturday 28 March 2026 00:54:24 +0000 (0:00:03.853) 0:03:42.140 ******** 2026-03-28 00:57:52.730172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-28 00:57:52.730184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.730192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.730206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.730213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-28 00:57:52.730222 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.730234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.730241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.730253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.730261 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.730275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-28 00:57:52.730282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.730290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.730304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.730311 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.730319 | orchestrator | 2026-03-28 00:57:52.730327 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-28 00:57:52.730335 | orchestrator | Saturday 28 March 2026 00:54:25 +0000 (0:00:00.735) 0:03:42.876 ******** 2026-03-28 00:57:52.730343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-28 00:57:52.730352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-28 00:57:52.730360 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.730383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-28 00:57:52.730391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-28 00:57:52.730403 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.730415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8786', 'listen_port': '8786'}})  2026-03-28 00:57:52.730422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-28 00:57:52.730429 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.730436 | orchestrator | 2026-03-28 00:57:52.730443 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-28 00:57:52.730451 | orchestrator | Saturday 28 March 2026 00:54:27 +0000 (0:00:01.519) 0:03:44.395 ******** 2026-03-28 00:57:52.730458 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.730464 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.730471 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.730478 | orchestrator | 2026-03-28 00:57:52.730484 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-28 00:57:52.730544 | orchestrator | Saturday 28 March 2026 00:54:28 +0000 (0:00:01.410) 0:03:45.805 ******** 2026-03-28 00:57:52.730553 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.730561 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.730568 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.730575 | orchestrator | 2026-03-28 00:57:52.730582 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-28 00:57:52.730588 | orchestrator | Saturday 28 March 2026 00:54:30 +0000 (0:00:02.179) 0:03:47.985 ******** 2026-03-28 00:57:52.730595 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:52.730602 | orchestrator | 2026-03-28 00:57:52.730610 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-28 00:57:52.730617 | orchestrator | Saturday 28 March 2026 
00:54:32 +0000 (0:00:01.200) 0:03:49.185 ******** 2026-03-28 00:57:52.730624 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 00:57:52.730631 | orchestrator | 2026-03-28 00:57:52.730637 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-28 00:57:52.730645 | orchestrator | Saturday 28 March 2026 00:54:34 +0000 (0:00:02.866) 0:03:52.052 ******** 2026-03-28 00:57:52.730669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:57:52.730687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 00:57:52.730694 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.730728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:57:52.730737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 00:57:52.730744 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.730762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:57:52.730775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 00:57:52.730783 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.730790 | orchestrator | 2026-03-28 00:57:52.730798 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-28 00:57:52.730805 | orchestrator | Saturday 28 March 2026 00:54:37 +0000 (0:00:02.452) 0:03:54.504 ******** 2026-03-28 00:57:52.730817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:57:52.730830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 00:57:52.730838 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.730850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 
'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:57:52.730858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 00:57:52.730867 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.730879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:57:52.730897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 00:57:52.730905 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.730913 | orchestrator | 2026-03-28 00:57:52.730920 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-28 00:57:52.730927 | orchestrator | Saturday 28 March 2026 00:54:39 +0000 (0:00:02.503) 0:03:57.007 ******** 2026-03-28 00:57:52.730935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 00:57:52.730944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 00:57:52.730951 | 
orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.730958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 00:57:52.730974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 00:57:52.730983 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.730990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 
00:57:52.730997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 00:57:52.731008 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.731015 | orchestrator | 2026-03-28 00:57:52.731063 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-28 00:57:52.731072 | orchestrator | Saturday 28 March 2026 00:54:42 +0000 (0:00:03.004) 0:04:00.011 ******** 2026-03-28 00:57:52.731080 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.731115 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.731122 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.731130 | orchestrator | 2026-03-28 00:57:52.731137 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-28 00:57:52.731145 | orchestrator | Saturday 28 March 2026 00:54:44 +0000 (0:00:01.908) 0:04:01.920 ******** 2026-03-28 00:57:52.731151 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.731158 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.731165 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.731173 | orchestrator | 2026-03-28 00:57:52.731180 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-28 00:57:52.731187 | orchestrator | Saturday 28 March 2026 00:54:46 +0000 (0:00:01.553) 0:04:03.474 ******** 2026-03-28 00:57:52.731194 
| orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.731202 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.731208 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.731214 | orchestrator | 2026-03-28 00:57:52.731221 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-28 00:57:52.731228 | orchestrator | Saturday 28 March 2026 00:54:46 +0000 (0:00:00.356) 0:04:03.830 ******** 2026-03-28 00:57:52.731236 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:52.731243 | orchestrator | 2026-03-28 00:57:52.731250 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-28 00:57:52.731257 | orchestrator | Saturday 28 March 2026 00:54:48 +0000 (0:00:01.404) 0:04:05.235 ******** 2026-03-28 00:57:52.731265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-28 00:57:52.731287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-28 00:57:52.731295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-28 00:57:52.731303 | orchestrator | 2026-03-28 00:57:52.731310 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-28 00:57:52.731317 | orchestrator | Saturday 28 March 2026 00:54:49 +0000 (0:00:01.437) 0:04:06.672 ******** 2026-03-28 00:57:52.731329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-28 00:57:52.731336 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.731344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-28 00:57:52.731356 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.731364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-28 00:57:52.731462 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.731471 | orchestrator | 2026-03-28 00:57:52.731477 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-28 00:57:52.731485 | orchestrator | Saturday 28 March 2026 00:54:49 +0000 (0:00:00.416) 0:04:07.088 ******** 2026-03-28 00:57:52.731493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-28 00:57:52.731506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-28 00:57:52.731512 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.731519 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.731526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-28 00:57:52.731534 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.731541 | orchestrator | 2026-03-28 00:57:52.731548 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-28 00:57:52.731555 | orchestrator | Saturday 28 March 2026 00:54:50 +0000 (0:00:00.951) 0:04:08.040 ******** 2026-03-28 00:57:52.731561 | orchestrator | skipping: 
[testbed-node-0] 2026-03-28 00:57:52.731568 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.731575 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.731582 | orchestrator | 2026-03-28 00:57:52.731589 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-28 00:57:52.731596 | orchestrator | Saturday 28 March 2026 00:54:51 +0000 (0:00:00.645) 0:04:08.686 ******** 2026-03-28 00:57:52.731602 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.731609 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.731616 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.731622 | orchestrator | 2026-03-28 00:57:52.731629 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-28 00:57:52.731635 | orchestrator | Saturday 28 March 2026 00:54:52 +0000 (0:00:01.377) 0:04:10.063 ******** 2026-03-28 00:57:52.731642 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.731649 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.731657 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.731663 | orchestrator | 2026-03-28 00:57:52.731676 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-28 00:57:52.731684 | orchestrator | Saturday 28 March 2026 00:54:53 +0000 (0:00:00.351) 0:04:10.414 ******** 2026-03-28 00:57:52.731691 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:52.731704 | orchestrator | 2026-03-28 00:57:52.731711 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-28 00:57:52.731718 | orchestrator | Saturday 28 March 2026 00:54:54 +0000 (0:00:01.495) 0:04:11.910 ******** 2026-03-28 00:57:52.731725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 00:57:52.731733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.731746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.731753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.731765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 
'timeout': '30'}}})  2026-03-28 00:57:52.731782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.731792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:52.731800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:52.731808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 
'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.731820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:52.731828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.731843 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-28 00:57:52.731852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:52.731858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.731868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 00:57:52.731876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:52.731887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 00:57:52.731900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.731907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.731915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.731926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-28 00:57:52.731933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': 
False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.731952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:52.731960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 00:57:52.731967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 
'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:52.731975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.731985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.731994 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:52.732015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-28 00:57:52.732041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': 
{'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-28 00:57:52.732051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:52.732071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:52.732078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:52.732097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 00:57:52.732105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:52.732130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:52.732137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-28 00:57:52.732156 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:52.732164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 00:57:52.732191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:52.732199 | orchestrator | 2026-03-28 00:57:52.732205 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-28 00:57:52.732213 | orchestrator | Saturday 28 March 2026 00:54:59 +0000 (0:00:04.628) 0:04:16.538 ******** 2026-03-28 00:57:52.732220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 00:57:52.732230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 00:57:52.732245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732257 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': 
'30'}}})  2026-03-28 00:57:52.732556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-28 00:57:52.732571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:52.732636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:52.732663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:52.732676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:52.732692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:52.732700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-28 00:57:52.732767 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:52.732774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 00:57:52.732793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:52.732853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 00:57:52.732877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:52.732891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.732955 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.733073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.733085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-28 00:57:52.733099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-28 00:57:52.733106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:52.733113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.733120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 
'timeout': '30'}}})  2026-03-28 00:57:52.733134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:52.733188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:52.733197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 00:57:52.733209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.733216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:52.733223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:52.733237 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.733244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.733269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-28 00:57:52.733278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 
'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:52.733294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.733302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  
2026-03-28 00:57:52.733310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:52.733323 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.733398 | orchestrator | 2026-03-28 00:57:52.733408 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-28 00:57:52.733414 | orchestrator | Saturday 28 March 2026 00:55:01 +0000 (0:00:01.757) 0:04:18.296 ******** 2026-03-28 00:57:52.733421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-28 00:57:52.733429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-28 00:57:52.733435 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.733466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-28 00:57:52.733473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-28 00:57:52.733481 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.733486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-28 00:57:52.733492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-28 00:57:52.733498 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.733503 | orchestrator | 2026-03-28 00:57:52.733509 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-28 00:57:52.733515 | orchestrator | Saturday 28 March 2026 00:55:03 +0000 (0:00:02.721) 0:04:21.017 ******** 2026-03-28 00:57:52.733522 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.733527 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.733533 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.733539 | orchestrator | 2026-03-28 00:57:52.733545 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-28 00:57:52.733551 | orchestrator | Saturday 28 March 2026 00:55:05 +0000 (0:00:01.425) 0:04:22.442 ******** 2026-03-28 00:57:52.733558 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.733564 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.733570 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.733575 | orchestrator | 2026-03-28 00:57:52.733582 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-28 00:57:52.733588 | orchestrator | Saturday 28 March 2026 00:55:07 +0000 (0:00:02.527) 0:04:24.969 ******** 2026-03-28 00:57:52.733601 | orchestrator 
| included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:52.733607 | orchestrator | 2026-03-28 00:57:52.733614 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-28 00:57:52.733620 | orchestrator | Saturday 28 March 2026 00:55:09 +0000 (0:00:01.354) 0:04:26.324 ******** 2026-03-28 00:57:52.733628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:52.733642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:52.733671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:52.733678 | orchestrator | 2026-03-28 00:57:52.733685 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-28 00:57:52.733691 | orchestrator | Saturday 28 March 2026 00:55:13 +0000 (0:00:03.895) 0:04:30.220 ******** 2026-03-28 00:57:52.733698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 00:57:52.733705 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.733717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 00:57:52.733728 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.733735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 00:57:52.733741 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.733748 | orchestrator | 2026-03-28 00:57:52.733756 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-28 00:57:52.733763 | orchestrator | Saturday 28 March 2026 00:55:13 +0000 (0:00:00.625) 0:04:30.845 ******** 2026-03-28 00:57:52.733770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-28 00:57:52.733780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-28 00:57:52.733788 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.733816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-28 00:57:52.733825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-28 
00:57:52.733833 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.733842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-28 00:57:52.733850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-28 00:57:52.733857 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.733866 | orchestrator | 2026-03-28 00:57:52.733873 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-28 00:57:52.733881 | orchestrator | Saturday 28 March 2026 00:55:14 +0000 (0:00:01.252) 0:04:32.098 ******** 2026-03-28 00:57:52.733895 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.733904 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.733912 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.733919 | orchestrator | 2026-03-28 00:57:52.733927 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-28 00:57:52.733934 | orchestrator | Saturday 28 March 2026 00:55:16 +0000 (0:00:01.482) 0:04:33.581 ******** 2026-03-28 00:57:52.733941 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.733948 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.733955 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.733961 | orchestrator | 2026-03-28 00:57:52.733971 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-28 00:57:52.733979 | orchestrator | Saturday 28 March 2026 00:55:18 +0000 (0:00:02.199) 0:04:35.781 ******** 2026-03-28 00:57:52.733986 | orchestrator | included: nova for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-28 00:57:52.733994 | orchestrator | 2026-03-28 00:57:52.734001 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-28 00:57:52.734009 | orchestrator | Saturday 28 March 2026 00:55:20 +0000 (0:00:01.690) 0:04:37.472 ******** 2026-03-28 00:57:52.734044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:52.734055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.734086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.734096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:52.734116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.734124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.734132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:52.734161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.734174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.734181 | orchestrator | 2026-03-28 00:57:52.734189 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-28 00:57:52.734196 | orchestrator | Saturday 28 March 2026 00:55:25 +0000 (0:00:05.187) 0:04:42.660 ******** 2026-03-28 00:57:52.734205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-28 00:57:52.734212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.734219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.734226 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.734273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-28 00:57:52.734289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.734297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.734303 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.734310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-28 00:57:52.734318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.734350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:52.734357 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.734364 | orchestrator | 2026-03-28 00:57:52.734392 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-28 00:57:52.734398 | orchestrator | Saturday 28 March 2026 00:55:27 +0000 (0:00:01.496) 0:04:44.156 ******** 2026-03-28 00:57:52.734404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-28 00:57:52.734412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-28 00:57:52.734419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-28 00:57:52.734432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-28 00:57:52.734440 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.734447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-28 00:57:52.734454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-28 00:57:52.734461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-28 00:57:52.734468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-28 00:57:52.734475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-28 00:57:52.734481 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.734488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-28 00:57:52.734495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-28 00:57:52.734501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-28 00:57:52.734514 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.734521 | orchestrator | 2026-03-28 00:57:52.734529 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users 
config] *************** 2026-03-28 00:57:52.734536 | orchestrator | Saturday 28 March 2026 00:55:28 +0000 (0:00:01.015) 0:04:45.172 ******** 2026-03-28 00:57:52.734542 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.734548 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.734555 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.734561 | orchestrator | 2026-03-28 00:57:52.734568 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-28 00:57:52.734574 | orchestrator | Saturday 28 March 2026 00:55:29 +0000 (0:00:01.470) 0:04:46.642 ******** 2026-03-28 00:57:52.734581 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.734589 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.734596 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.734603 | orchestrator | 2026-03-28 00:57:52.734632 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-28 00:57:52.734639 | orchestrator | Saturday 28 March 2026 00:55:31 +0000 (0:00:02.175) 0:04:48.818 ******** 2026-03-28 00:57:52.734646 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:52.734653 | orchestrator | 2026-03-28 00:57:52.734660 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-28 00:57:52.734667 | orchestrator | Saturday 28 March 2026 00:55:33 +0000 (0:00:01.582) 0:04:50.401 ******** 2026-03-28 00:57:52.734674 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-28 00:57:52.734681 | orchestrator | 2026-03-28 00:57:52.734688 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-28 00:57:52.734695 | orchestrator | Saturday 28 March 2026 00:55:34 +0000 (0:00:00.855) 
0:04:51.256 ******** 2026-03-28 00:57:52.734703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-28 00:57:52.734714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-28 00:57:52.734721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-28 00:57:52.734729 | orchestrator | 2026-03-28 00:57:52.734735 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-28 00:57:52.734742 | orchestrator | Saturday 28 March 2026 00:55:38 +0000 (0:00:04.262) 0:04:55.519 ******** 2026-03-28 00:57:52.734749 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-28 00:57:52.734761 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:52.734768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-28 00:57:52.734774 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:52.734781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-28 00:57:52.734788 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:52.734794 | orchestrator |
2026-03-28 00:57:52.734819 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-03-28 00:57:52.734826 | orchestrator | Saturday 28 March 2026 00:55:39 +0000 (0:00:01.497) 0:04:57.016 ********
2026-03-28 00:57:52.734833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-28 00:57:52.734839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-28 00:57:52.734847 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:52.734854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-28 00:57:52.734861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-28 00:57:52.734869 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:52.734876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-28 00:57:52.734887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-28 00:57:52.734894 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:52.734900 | orchestrator |
2026-03-28 00:57:52.734907 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-28 00:57:52.734915 | orchestrator | Saturday 28 March 2026 00:55:41 +0000 (0:00:01.733) 0:04:58.750 ********
2026-03-28 00:57:52.734921 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:57:52.734927 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:57:52.734938 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:57:52.734945 | orchestrator |
2026-03-28 00:57:52.734951 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-28 00:57:52.734958 | orchestrator | Saturday 28 March 2026 00:55:44 +0000 (0:00:02.571) 0:05:01.321 ********
2026-03-28 00:57:52.734965 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:57:52.734971 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:57:52.734977 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:57:52.734985 | orchestrator |
2026-03-28 00:57:52.734992 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-03-28 00:57:52.734998 | orchestrator | Saturday 28 March 2026 00:55:47 +0000 (0:00:03.131) 0:05:04.452 ********
2026-03-28 00:57:52.735006 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-03-28 00:57:52.735012 | orchestrator |
2026-03-28 00:57:52.735018 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-03-28 00:57:52.735024 | orchestrator | Saturday 28 March 2026 00:55:48 +0000 (0:00:01.564) 0:05:06.017 ********
2026-03-28 00:57:52.735029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-28 00:57:52.735036 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:52.735042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-28 00:57:52.735050 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:52.735080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-28 00:57:52.735088 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:52.735096 | orchestrator |
2026-03-28 00:57:52.735103 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-03-28 00:57:52.735110 | orchestrator | Saturday 28 March 2026 00:55:50 +0000 (0:00:01.322) 0:05:07.340 ********
2026-03-28 00:57:52.735117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-28 00:57:52.735125 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:52.735136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-28 00:57:52.735150 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:52.735158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-28 00:57:52.735165 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:52.735171 | orchestrator |
2026-03-28 00:57:52.735177 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-03-28 00:57:52.735184 | orchestrator | Saturday 28 March 2026 00:55:51 +0000 (0:00:01.435) 0:05:08.776 ********
2026-03-28 00:57:52.735191 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:52.735197 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:52.735204 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:52.735211 | orchestrator |
2026-03-28 00:57:52.735217 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-28 00:57:52.735224 | orchestrator | Saturday 28 March 2026 00:55:53 +0000 (0:00:01.931) 0:05:10.708 ********
2026-03-28 00:57:52.735230 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:52.735237 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:52.735243 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:52.735249 | orchestrator |
2026-03-28 00:57:52.735256 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-28 00:57:52.735263 | orchestrator | Saturday 28 March 2026 00:55:56 +0000 (0:00:02.490) 0:05:13.198 ********
2026-03-28 00:57:52.735269 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:52.735276 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:52.735282 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:52.735288 | orchestrator |
2026-03-28 00:57:52.735295 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-03-28 00:57:52.735302 | orchestrator | Saturday 28 March 2026 00:55:59 +0000 (0:00:03.184) 0:05:16.383 ********
2026-03-28 00:57:52.735309 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-03-28 00:57:52.735316 | orchestrator |
2026-03-28 00:57:52.735322 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-03-28 00:57:52.735329 | orchestrator | Saturday 28 March 2026 00:56:00 +0000 (0:00:00.866) 0:05:17.250 ********
2026-03-28 00:57:52.735357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-28 00:57:52.735365 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:52.735427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-28 00:57:52.735441 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:52.735449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-28 00:57:52.735455 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:52.735463 | orchestrator |
2026-03-28 00:57:52.735470 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-03-28 00:57:52.735476 | orchestrator | Saturday 28 March 2026 00:56:01 +0000 (0:00:01.427) 0:05:18.677 ********
2026-03-28 00:57:52.735488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-28 00:57:52.735495 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:52.735502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-28 00:57:52.735509 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:52.735516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-28 00:57:52.735524 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:52.735530 | orchestrator |
2026-03-28 00:57:52.735537 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-03-28 00:57:52.735544 | orchestrator | Saturday 28 March 2026 00:56:03 +0000 (0:00:01.476) 0:05:20.154 ********
2026-03-28 00:57:52.735550 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:52.735556 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:52.735563 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:52.735570 | orchestrator |
2026-03-28 00:57:52.735578 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-28 00:57:52.735585 | orchestrator | Saturday 28 March 2026 00:56:04 +0000 (0:00:01.721) 0:05:21.875 ********
2026-03-28 00:57:52.735591 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:52.735597 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:52.735604 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:52.735612 | orchestrator |
2026-03-28 00:57:52.735618 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-28 00:57:52.735630 | orchestrator | Saturday 28 March 2026 00:56:07 +0000 (0:00:02.543) 0:05:24.418 ********
2026-03-28 00:57:52.735637 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:52.735643 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:52.735650 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:52.735657 | orchestrator |
2026-03-28 00:57:52.735665 | orchestrator | TASK [include_role : octavia] **************************************************
2026-03-28 00:57:52.735671 | orchestrator | Saturday 28 March 2026 00:56:10 +0000 (0:00:03.465) 0:05:27.884 ********
2026-03-28 00:57:52.735705 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:57:52.735712 | orchestrator |
2026-03-28 00:57:52.735719 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-03-28 00:57:52.735726 | orchestrator | Saturday 28 March 2026 00:56:12 +0000 (0:00:01.646) 0:05:29.531 ********
2026-03-28 00:57:52.735734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-28 00:57:52.735745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-28 00:57:52.735752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-28 00:57:52.735759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-28 00:57:52.735767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-28 00:57:52.735800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-28 00:57:52.735807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-28 00:57:52.735818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-28 00:57:52.735825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-28 00:57:52.735832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-28 00:57:52.735840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-28 00:57:52.735869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-28 00:57:52.735877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-28 00:57:52.735885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-28 00:57:52.735898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-28 00:57:52.735905 | orchestrator |
2026-03-28 00:57:52.735911 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2026-03-28 00:57:52.735918 | orchestrator | Saturday 28 March 2026 00:56:16 +0000 (0:00:03.871) 0:05:33.402 ********
2026-03-28 00:57:52.735925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-28 00:57:52.735937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-28 00:57:52.735963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-28 00:57:52.735970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-28 00:57:52.735978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-28 00:57:52.735985 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:52.735995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-28 00:57:52.736003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-28 00:57:52.736070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-28 00:57:52.736102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-28 00:57:52.736112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-28 00:57:52.736119 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:52.736130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-28 00:57:52.736138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-28 00:57:52.736144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-28 00:57:52.736157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-28 00:57:52.736183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-28 00:57:52.736191 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:52.736197 | orchestrator |
2026-03-28 00:57:52.736204 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-03-28 00:57:52.736210 | orchestrator | Saturday 28 March 2026 00:56:17 +0000 (0:00:00.770) 0:05:34.173 ********
2026-03-28 00:57:52.736217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-28 00:57:52.736225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-28 00:57:52.736271 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:52.736278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-28 00:57:52.736285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-28 00:57:52.736292 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:52.736298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-28 00:57:52.736311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 00:57:52.736319 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.736326 | orchestrator | 2026-03-28 00:57:52.736333 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-28 00:57:52.736340 | orchestrator | Saturday 28 March 2026 00:56:18 +0000 (0:00:01.569) 0:05:35.742 ******** 2026-03-28 00:57:52.736351 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.736357 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.736366 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.736438 | orchestrator | 2026-03-28 00:57:52.736444 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-28 00:57:52.736449 | orchestrator | Saturday 28 March 2026 00:56:20 +0000 (0:00:01.425) 0:05:37.168 ******** 2026-03-28 00:57:52.736454 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.736460 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.736466 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.736473 | orchestrator | 2026-03-28 00:57:52.736479 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-28 00:57:52.736485 | orchestrator | Saturday 28 March 2026 00:56:22 +0000 (0:00:02.257) 0:05:39.425 ******** 2026-03-28 00:57:52.736491 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:52.736496 | orchestrator | 2026-03-28 00:57:52.736501 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-28 00:57:52.736507 | orchestrator | Saturday 28 March 2026 00:56:23 +0000 (0:00:01.479) 0:05:40.905 ******** 2026-03-28 00:57:52.736515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 00:57:52.736553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 00:57:52.736561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 00:57:52.736574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 00:57:52.736594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 00:57:52.736621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 00:57:52.736630 | orchestrator | 2026-03-28 00:57:52.736637 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when 
using single external frontend] *** 2026-03-28 00:57:52.736644 | orchestrator | Saturday 28 March 2026 00:56:29 +0000 (0:00:05.827) 0:05:46.733 ******** 2026-03-28 00:57:52.736651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 00:57:52.736667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 00:57:52.736674 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.736679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 00:57:52.736702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 00:57:52.736709 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.736714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 00:57:52.736729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 00:57:52.736735 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.736741 | orchestrator | 2026-03-28 00:57:52.736746 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-28 00:57:52.736752 | orchestrator | Saturday 28 March 2026 00:56:30 +0000 (0:00:00.668) 0:05:47.402 ******** 2026-03-28 00:57:52.736758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-28 00:57:52.736764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-28 00:57:52.736770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-28 00:57:52.736775 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.736781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-28 00:57:52.736786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  
2026-03-28 00:57:52.736792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-28 00:57:52.736797 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.736802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-28 00:57:52.736825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-28 00:57:52.736833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-28 00:57:52.736842 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.736856 | orchestrator | 2026-03-28 00:57:52.736862 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-28 00:57:52.736867 | orchestrator | Saturday 28 March 2026 00:56:31 +0000 (0:00:00.975) 0:05:48.377 ******** 2026-03-28 00:57:52.736874 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.736879 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.736885 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.736890 | orchestrator | 2026-03-28 00:57:52.736896 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-28 00:57:52.736902 | orchestrator | Saturday 28 March 2026 
00:56:32 +0000 (0:00:00.825) 0:05:49.203 ******** 2026-03-28 00:57:52.736909 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.736915 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.736920 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.736925 | orchestrator | 2026-03-28 00:57:52.736930 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-28 00:57:52.736937 | orchestrator | Saturday 28 March 2026 00:56:33 +0000 (0:00:01.427) 0:05:50.630 ******** 2026-03-28 00:57:52.736942 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:52.736948 | orchestrator | 2026-03-28 00:57:52.736953 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-28 00:57:52.736959 | orchestrator | Saturday 28 March 2026 00:56:35 +0000 (0:00:01.516) 0:05:52.147 ******** 2026-03-28 00:57:52.736973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-28 00:57:52.736981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-28 00:57:52.736987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 00:57:52.736995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 00:57:52.737037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2026-03-28 00:57:52.737061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 00:57:52.737074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True}}}}) 2026-03-28 00:57:52.737105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 00:57:52.737113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 00:57:52.737135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-28 00:57:52.737142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-28 00:57:52.737158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 00:57:52.737182 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-28 00:57:52.737189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-28 00:57:52.737196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 00:57:52.737227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-28 00:57:52.737231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-28 00:57:52.737235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 00:57:52.737251 | orchestrator | 2026-03-28 00:57:52.737258 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-28 00:57:52.737262 | orchestrator | Saturday 28 March 2026 00:56:39 +0000 (0:00:04.764) 0:05:56.911 ******** 2026-03-28 00:57:52.737266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-28 00:57:52.737270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 00:57:52.737277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 00:57:52.737293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-28 00:57:52.737312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-28 00:57:52.737317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 00:57:52.737332 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.737336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-28 00:57:52.737344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 00:57:52.737351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 00:57:52.737366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-28 00:57:52.737388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-28 00:57:52.737397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 
'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-28 00:57:52.737413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 00:57:52.737417 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.737424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 00:57:52.737428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 00:57:52.737446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-28 00:57:52.737451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 
'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-28 00:57:52.737457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:52.737469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 00:57:52.737473 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.737477 | orchestrator | 2026-03-28 00:57:52.737481 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-28 00:57:52.737485 | 
orchestrator | Saturday 28 March 2026 00:56:41 +0000 (0:00:01.454) 0:05:58.366 ******** 2026-03-28 00:57:52.737489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-28 00:57:52.737493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-28 00:57:52.737497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-28 00:57:52.737502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-28 00:57:52.737509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-28 00:57:52.737513 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.737517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-28 00:57:52.737521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-28 00:57:52.737525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-28 00:57:52.737529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-28 00:57:52.737533 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.737540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-28 00:57:52.737544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-28 00:57:52.737553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-28 00:57:52.737556 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.737560 | orchestrator | 2026-03-28 00:57:52.737564 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 
2026-03-28 00:57:52.737568 | orchestrator | Saturday 28 March 2026 00:56:42 +0000 (0:00:01.054) 0:05:59.420 ******** 2026-03-28 00:57:52.737572 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.737575 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.737579 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.737583 | orchestrator | 2026-03-28 00:57:52.737587 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-28 00:57:52.737591 | orchestrator | Saturday 28 March 2026 00:56:42 +0000 (0:00:00.397) 0:05:59.818 ******** 2026-03-28 00:57:52.737594 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.737598 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.737602 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.737605 | orchestrator | 2026-03-28 00:57:52.737609 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-28 00:57:52.737613 | orchestrator | Saturday 28 March 2026 00:56:43 +0000 (0:00:01.310) 0:06:01.128 ******** 2026-03-28 00:57:52.737617 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:52.737621 | orchestrator | 2026-03-28 00:57:52.737624 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-28 00:57:52.737628 | orchestrator | Saturday 28 March 2026 00:56:45 +0000 (0:00:01.625) 0:06:02.754 ******** 2026-03-28 00:57:52.737634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:57:52.737639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:57:52.737649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:57:52.737653 | orchestrator | 2026-03-28 00:57:52.737657 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-28 00:57:52.737661 | orchestrator | Saturday 28 March 2026 00:56:47 +0000 (0:00:02.352) 0:06:05.107 ******** 2026-03-28 00:57:52.737665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 00:57:52.737669 | orchestrator | skipping: 
[testbed-node-0] 2026-03-28 00:57:52.737675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 00:57:52.737680 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.737683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 
'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 00:57:52.737692 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.737695 | orchestrator | 2026-03-28 00:57:52.737699 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-28 00:57:52.737703 | orchestrator | Saturday 28 March 2026 00:56:48 +0000 (0:00:00.346) 0:06:05.453 ******** 2026-03-28 00:57:52.737707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-28 00:57:52.737715 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.737718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-28 00:57:52.737722 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.737726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-28 00:57:52.737730 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.737734 | orchestrator | 2026-03-28 00:57:52.737737 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-28 00:57:52.737741 | orchestrator | Saturday 28 March 2026 00:56:49 +0000 (0:00:00.864) 0:06:06.318 ******** 2026-03-28 00:57:52.737745 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.737749 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.737753 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.737756 | orchestrator | 2026-03-28 00:57:52.737760 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-28 
00:57:52.737764 | orchestrator | Saturday 28 March 2026 00:56:49 +0000 (0:00:00.446) 0:06:06.764 ******** 2026-03-28 00:57:52.737768 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.737772 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.737775 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.737779 | orchestrator | 2026-03-28 00:57:52.737783 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-28 00:57:52.737787 | orchestrator | Saturday 28 March 2026 00:56:50 +0000 (0:00:01.234) 0:06:07.999 ******** 2026-03-28 00:57:52.737790 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:52.737794 | orchestrator | 2026-03-28 00:57:52.737798 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-28 00:57:52.737802 | orchestrator | Saturday 28 March 2026 00:56:52 +0000 (0:00:01.634) 0:06:09.633 ******** 2026-03-28 00:57:52.737806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-28 
00:57:52.737813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:52.737823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:52.737827 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:52.737832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:52.737838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': 
{'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:52.737845 | orchestrator | 2026-03-28 00:57:52.737849 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-28 00:57:52.737853 | orchestrator | Saturday 28 March 2026 00:56:58 +0000 (0:00:05.608) 0:06:15.241 ******** 2026-03-28 00:57:52.737857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-28 00:57:52.737863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-28 00:57:52.737868 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.737871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-28 00:57:52.737878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-28 00:57:52.737887 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.737891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 
'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-28 00:57:52.737895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-28 00:57:52.737899 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.737903 | orchestrator | 2026-03-28 00:57:52.737906 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-28 00:57:52.737910 | orchestrator | Saturday 28 March 2026 00:56:58 +0000 (0:00:00.655) 0:06:15.897 ******** 2026-03-28 00:57:52.737914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-28 00:57:52.737919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 
'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-28 00:57:52.737923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-28 00:57:52.737927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-28 00:57:52.737933 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.737999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-28 00:57:52.738042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-28 00:57:52.738047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-28 00:57:52.738055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-28 00:57:52.738059 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.738063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-28 
00:57:52.738067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-28 00:57:52.738071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-28 00:57:52.738075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-28 00:57:52.738079 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.738082 | orchestrator | 2026-03-28 00:57:52.738086 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-28 00:57:52.738090 | orchestrator | Saturday 28 March 2026 00:57:00 +0000 (0:00:01.472) 0:06:17.370 ******** 2026-03-28 00:57:52.738094 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.738098 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.738101 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.738105 | orchestrator | 2026-03-28 00:57:52.738109 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-28 00:57:52.738116 | orchestrator | Saturday 28 March 2026 00:57:01 +0000 (0:00:01.514) 0:06:18.884 ******** 2026-03-28 00:57:52.738119 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.738123 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.738127 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.738131 | orchestrator | 2026-03-28 00:57:52.738134 | orchestrator | TASK [include_role : swift] 
**************************************************** 2026-03-28 00:57:52.738138 | orchestrator | Saturday 28 March 2026 00:57:03 +0000 (0:00:02.189) 0:06:21.073 ******** 2026-03-28 00:57:52.738142 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.738146 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.738149 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.738153 | orchestrator | 2026-03-28 00:57:52.738157 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-28 00:57:52.738161 | orchestrator | Saturday 28 March 2026 00:57:04 +0000 (0:00:00.360) 0:06:21.434 ******** 2026-03-28 00:57:52.738165 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.738168 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.738172 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.738180 | orchestrator | 2026-03-28 00:57:52.738183 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-28 00:57:52.738187 | orchestrator | Saturday 28 March 2026 00:57:04 +0000 (0:00:00.352) 0:06:21.786 ******** 2026-03-28 00:57:52.738191 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.738195 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.738198 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.738202 | orchestrator | 2026-03-28 00:57:52.738206 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-28 00:57:52.738210 | orchestrator | Saturday 28 March 2026 00:57:05 +0000 (0:00:00.659) 0:06:22.446 ******** 2026-03-28 00:57:52.738214 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.738218 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.738222 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.738225 | orchestrator | 2026-03-28 00:57:52.738229 | orchestrator | TASK [include_role : watcher] 
************************************************** 2026-03-28 00:57:52.738233 | orchestrator | Saturday 28 March 2026 00:57:05 +0000 (0:00:00.336) 0:06:22.782 ******** 2026-03-28 00:57:52.738237 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.738240 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.738244 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.738248 | orchestrator | 2026-03-28 00:57:52.738252 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-28 00:57:52.738255 | orchestrator | Saturday 28 March 2026 00:57:06 +0000 (0:00:00.390) 0:06:23.173 ******** 2026-03-28 00:57:52.738259 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.738263 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.738267 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.738270 | orchestrator | 2026-03-28 00:57:52.738274 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-28 00:57:52.738278 | orchestrator | Saturday 28 March 2026 00:57:06 +0000 (0:00:00.891) 0:06:24.064 ******** 2026-03-28 00:57:52.738282 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:52.738286 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:52.738289 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:52.738293 | orchestrator | 2026-03-28 00:57:52.738297 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-28 00:57:52.738301 | orchestrator | Saturday 28 March 2026 00:57:07 +0000 (0:00:00.698) 0:06:24.763 ******** 2026-03-28 00:57:52.738304 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:52.738308 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:52.738312 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:52.738316 | orchestrator | 2026-03-28 00:57:52.738319 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] 
************** 2026-03-28 00:57:52.738323 | orchestrator | Saturday 28 March 2026 00:57:07 +0000 (0:00:00.358) 0:06:25.122 ******** 2026-03-28 00:57:52.738327 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:52.738331 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:52.738335 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:52.738338 | orchestrator | 2026-03-28 00:57:52.738344 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-28 00:57:52.738348 | orchestrator | Saturday 28 March 2026 00:57:08 +0000 (0:00:00.963) 0:06:26.085 ******** 2026-03-28 00:57:52.738352 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:52.738356 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:52.738359 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:52.738363 | orchestrator | 2026-03-28 00:57:52.738378 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-28 00:57:52.738385 | orchestrator | Saturday 28 March 2026 00:57:10 +0000 (0:00:01.413) 0:06:27.498 ******** 2026-03-28 00:57:52.738391 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:52.738398 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:52.738404 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:52.738410 | orchestrator | 2026-03-28 00:57:52.738416 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-28 00:57:52.738427 | orchestrator | Saturday 28 March 2026 00:57:11 +0000 (0:00:00.971) 0:06:28.470 ******** 2026-03-28 00:57:52.738433 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.738439 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.738446 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.738451 | orchestrator | 2026-03-28 00:57:52.738457 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-28 00:57:52.738464 | orchestrator | 
Saturday 28 March 2026 00:57:21 +0000 (0:00:10.033) 0:06:38.504 ******** 2026-03-28 00:57:52.738470 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:52.738477 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:52.738483 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:52.738490 | orchestrator | 2026-03-28 00:57:52.738497 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-28 00:57:52.738504 | orchestrator | Saturday 28 March 2026 00:57:22 +0000 (0:00:00.891) 0:06:39.396 ******** 2026-03-28 00:57:52.738509 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.738513 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.738516 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.738520 | orchestrator | 2026-03-28 00:57:52.738524 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-28 00:57:52.738528 | orchestrator | Saturday 28 March 2026 00:57:32 +0000 (0:00:10.448) 0:06:49.845 ******** 2026-03-28 00:57:52.738531 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:52.738539 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:52.738542 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:52.738546 | orchestrator | 2026-03-28 00:57:52.738550 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-28 00:57:52.738554 | orchestrator | Saturday 28 March 2026 00:57:36 +0000 (0:00:04.162) 0:06:54.007 ******** 2026-03-28 00:57:52.738557 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:52.738561 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:52.738565 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:52.738569 | orchestrator | 2026-03-28 00:57:52.738572 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-28 00:57:52.738576 | orchestrator | Saturday 28 March 2026 00:57:46 +0000 
(0:00:09.608) 0:07:03.615 ******** 2026-03-28 00:57:52.738580 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.738584 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.738588 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.738591 | orchestrator | 2026-03-28 00:57:52.738595 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-28 00:57:52.738599 | orchestrator | Saturday 28 March 2026 00:57:46 +0000 (0:00:00.389) 0:07:04.005 ******** 2026-03-28 00:57:52.738603 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.738606 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.738610 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.738614 | orchestrator | 2026-03-28 00:57:52.738618 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-28 00:57:52.738621 | orchestrator | Saturday 28 March 2026 00:57:47 +0000 (0:00:00.401) 0:07:04.406 ******** 2026-03-28 00:57:52.738625 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.738629 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.738633 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.738636 | orchestrator | 2026-03-28 00:57:52.738640 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-28 00:57:52.738644 | orchestrator | Saturday 28 March 2026 00:57:48 +0000 (0:00:00.736) 0:07:05.142 ******** 2026-03-28 00:57:52.738648 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.738651 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.738655 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.738659 | orchestrator | 2026-03-28 00:57:52.738663 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-28 00:57:52.738667 | orchestrator | Saturday 28 March 2026 00:57:48 +0000 
(0:00:00.383) 0:07:05.526 ******** 2026-03-28 00:57:52.738675 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.738678 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.738682 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.738686 | orchestrator | 2026-03-28 00:57:52.738690 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-28 00:57:52.738693 | orchestrator | Saturday 28 March 2026 00:57:48 +0000 (0:00:00.405) 0:07:05.932 ******** 2026-03-28 00:57:52.738697 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:52.738701 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:52.738705 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:52.738708 | orchestrator | 2026-03-28 00:57:52.738712 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-28 00:57:52.738716 | orchestrator | Saturday 28 March 2026 00:57:49 +0000 (0:00:00.380) 0:07:06.312 ******** 2026-03-28 00:57:52.738720 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:52.738724 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:52.738727 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:52.738731 | orchestrator | 2026-03-28 00:57:52.738735 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-28 00:57:52.738739 | orchestrator | Saturday 28 March 2026 00:57:50 +0000 (0:00:01.443) 0:07:07.755 ******** 2026-03-28 00:57:52.738742 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:52.738746 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:52.738750 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:52.738754 | orchestrator | 2026-03-28 00:57:52.738757 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:57:52.738764 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  
rescued=0 ignored=0 2026-03-28 00:57:52.738769 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-28 00:57:52.738773 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-28 00:57:52.738776 | orchestrator | 2026-03-28 00:57:52.738780 | orchestrator | 2026-03-28 00:57:52.738784 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:57:52.738788 | orchestrator | Saturday 28 March 2026 00:57:51 +0000 (0:00:00.855) 0:07:08.611 ******** 2026-03-28 00:57:52.738791 | orchestrator | =============================================================================== 2026-03-28 00:57:52.738795 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 10.45s 2026-03-28 00:57:52.738799 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.03s 2026-03-28 00:57:52.738803 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.61s 2026-03-28 00:57:52.738806 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 6.85s 2026-03-28 00:57:52.738810 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.99s 2026-03-28 00:57:52.738814 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.87s 2026-03-28 00:57:52.738818 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.83s 2026-03-28 00:57:52.738821 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 5.68s 2026-03-28 00:57:52.738825 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.61s 2026-03-28 00:57:52.738832 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 5.25s 2026-03-28 
00:57:52.738836 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.19s 2026-03-28 00:57:52.738839 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.90s 2026-03-28 00:57:52.738843 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.76s 2026-03-28 00:57:52.738850 | orchestrator | proxysql-config : Copying over aodh ProxySQL rules config --------------- 4.74s 2026-03-28 00:57:52.738854 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.74s 2026-03-28 00:57:52.738857 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.69s 2026-03-28 00:57:52.738861 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.63s 2026-03-28 00:57:52.738865 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.38s 2026-03-28 00:57:52.738868 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.26s 2026-03-28 00:57:52.738872 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.16s 2026-03-28 00:57:52.738876 | orchestrator | 2026-03-28 00:57:52 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:57:52.738880 | orchestrator | 2026-03-28 00:57:52 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:57:55.772460 | orchestrator | 2026-03-28 00:57:55 | INFO  | Task e0d4a19b-0517-499f-a97a-5a2fe67f6c1a is in state STARTED 2026-03-28 00:57:55.775614 | orchestrator | 2026-03-28 00:57:55 | INFO  | Task 941ca173-8f1e-4e15-81f9-7ba5de26472c is in state STARTED 2026-03-28 00:57:55.777999 | orchestrator | 2026-03-28 00:57:55 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state STARTED 2026-03-28 00:57:55.778104 | orchestrator | 2026-03-28 00:57:55 | INFO  | Wait 1 second(s) until the next check 
2026-03-28 01:00:07.104894 | orchestrator | 2026-03-28 01:00:07 | INFO  | Task e0d4a19b-0517-499f-a97a-5a2fe67f6c1a is in state STARTED 2026-03-28 01:00:07.106558 | orchestrator | 2026-03-28 01:00:07 | INFO  | Task 941ca173-8f1e-4e15-81f9-7ba5de26472c is in state STARTED 2026-03-28 01:00:07.113973 | orchestrator | 2026-03-28 01:00:07 | INFO  | Task 90d02355-39ec-41ea-af8c-b97d8fadfc6f is in state SUCCESS 2026-03-28 01:00:07.115772 | orchestrator | 2026-03-28 01:00:07.115836 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-28 01:00:07.115848 | orchestrator | 2.16.14 2026-03-28 01:00:07.115857 | orchestrator | 2026-03-28 01:00:07.115866 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-03-28 01:00:07.115875 | orchestrator | 2026-03-28 01:00:07.115883 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-28 01:00:07.115892 | orchestrator | Saturday 28 March 2026 00:47:41 +0000 (0:00:01.077) 0:00:01.078 ******** 2026-03-28 01:00:07.115901 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:07.115909 | orchestrator | 2026-03-28 01:00:07.115917 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-28 01:00:07.115924 | orchestrator | Saturday 28 March 2026 00:47:42 +0000 (0:00:02.157) 0:00:02.467 ******** 2026-03-28 01:00:07.115932 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.115940 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.115947 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.115953 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.115961 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.115968 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.115975 | orchestrator | 2026-03-28 01:00:07.115984 | orchestrator | TASK [ceph-facts : Set_fact is_atomic]
***************************************** 2026-03-28 01:00:07.115992 | orchestrator | Saturday 28 March 2026 00:47:44 +0000 (0:00:02.157) 0:00:04.625 ******** 2026-03-28 01:00:07.116000 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.116007 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.116015 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.116022 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.116029 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.116037 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.116045 | orchestrator | 2026-03-28 01:00:07.116053 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-28 01:00:07.116061 | orchestrator | Saturday 28 March 2026 00:47:45 +0000 (0:00:00.720) 0:00:05.345 ******** 2026-03-28 01:00:07.116069 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.116111 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.116119 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.116126 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.116135 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.116143 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.116231 | orchestrator | 2026-03-28 01:00:07.116242 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-28 01:00:07.116249 | orchestrator | Saturday 28 March 2026 00:47:46 +0000 (0:00:00.972) 0:00:06.317 ******** 2026-03-28 01:00:07.116297 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.116305 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.116314 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.116374 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.116385 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.116394 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.116402 | orchestrator | 2026-03-28 01:00:07.116409 | orchestrator | TASK 
[ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-28 01:00:07.116416 | orchestrator | Saturday 28 March 2026 00:47:47 +0000 (0:00:00.806) 0:00:07.124 ******** 2026-03-28 01:00:07.116424 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.116432 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.116439 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.116447 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.116456 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.116464 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.116473 | orchestrator | 2026-03-28 01:00:07.116482 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-28 01:00:07.116489 | orchestrator | Saturday 28 March 2026 00:47:47 +0000 (0:00:00.554) 0:00:07.678 ******** 2026-03-28 01:00:07.116497 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.116506 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.116514 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.116522 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.116530 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.116538 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.116546 | orchestrator | 2026-03-28 01:00:07.116555 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-28 01:00:07.116563 | orchestrator | Saturday 28 March 2026 00:47:48 +0000 (0:00:00.797) 0:00:08.476 ******** 2026-03-28 01:00:07.116572 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.116581 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.116590 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.116598 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.116606 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.116615 | orchestrator | skipping: [testbed-node-2] 2026-03-28 
01:00:07.116623 | orchestrator | 2026-03-28 01:00:07.116632 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-28 01:00:07.116640 | orchestrator | Saturday 28 March 2026 00:47:49 +0000 (0:00:00.804) 0:00:09.281 ******** 2026-03-28 01:00:07.116647 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.116656 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.116665 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.116673 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.116682 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.116690 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.116699 | orchestrator | 2026-03-28 01:00:07.116707 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-28 01:00:07.116715 | orchestrator | Saturday 28 March 2026 00:47:50 +0000 (0:00:00.866) 0:00:10.147 ******** 2026-03-28 01:00:07.116738 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 01:00:07.116747 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 01:00:07.116754 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 01:00:07.116761 | orchestrator | 2026-03-28 01:00:07.116768 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-28 01:00:07.116777 | orchestrator | Saturday 28 March 2026 00:47:51 +0000 (0:00:00.943) 0:00:11.090 ******** 2026-03-28 01:00:07.116785 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.116794 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.116812 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.116836 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.116845 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.116853 | orchestrator | ok: [testbed-node-2] 
2026-03-28 01:00:07.116862 | orchestrator |
2026-03-28 01:00:07.116870 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-28 01:00:07.116878 | orchestrator | Saturday 28 March 2026 00:47:52 +0000 (0:00:01.589) 0:00:12.680 ********
2026-03-28 01:00:07.116886 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 01:00:07.116895 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 01:00:07.116903 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 01:00:07.116911 | orchestrator |
2026-03-28 01:00:07.116920 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-28 01:00:07.116928 | orchestrator | Saturday 28 March 2026 00:47:55 +0000 (0:00:02.409) 0:00:15.090 ********
2026-03-28 01:00:07.116937 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 01:00:07.116945 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 01:00:07.116954 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-28 01:00:07.116961 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.116969 | orchestrator |
2026-03-28 01:00:07.116975 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-28 01:00:07.116982 | orchestrator | Saturday 28 March 2026 00:47:55 +0000 (0:00:00.696) 0:00:15.786 ********
2026-03-28 01:00:07.116992 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-28 01:00:07.117002 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-28 01:00:07.117044 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 01:00:07.117053 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.117060 | orchestrator |
2026-03-28 01:00:07.117067 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-28 01:00:07.117074 | orchestrator | Saturday 28 March 2026 00:47:57 +0000 (0:00:01.126) 0:00:16.913 ********
2026-03-28 01:00:07.117083 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 01:00:07.117094 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 01:00:07.117101 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 01:00:07.117293 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.117305 | orchestrator |
2026-03-28 01:00:07.117314 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-28 01:00:07.117322 | orchestrator | Saturday 28 March 2026 00:47:57 +0000 (0:00:00.323) 0:00:17.237 ********
2026-03-28 01:00:07.117364 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 00:47:53.557246', 'end': '2026-03-28 00:47:53.657116', 'delta': '0:00:00.099870', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-28 01:00:07.117376 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 00:47:54.269196', 'end': '2026-03-28 00:47:54.371774', 'delta': '0:00:00.102578', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-28 01:00:07.117384 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 00:47:55.019930', 'end': '2026-03-28 00:47:55.113658', 'delta': '0:00:00.093728', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 01:00:07.117392 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.117401 | orchestrator |
2026-03-28 01:00:07.117409 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-28 01:00:07.117416 | orchestrator | Saturday 28 March 2026 00:47:57 +0000 (0:00:00.290) 0:00:17.527 ********
2026-03-28 01:00:07.117423 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.117431 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.117439 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.117446 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:00:07.117452 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:00:07.117459 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:00:07.117466 | orchestrator |
2026-03-28 01:00:07.117473 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-28 01:00:07.117479 | orchestrator | Saturday 28 March 2026 00:47:59 +0000 (0:00:00.804) 0:00:19.285 ********
2026-03-28 01:00:07.117487 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-28 01:00:07.117494 | orchestrator |
2026-03-28 01:00:07.117502 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-28 01:00:07.117510 | orchestrator | Saturday 28 March 2026 00:48:00 +0000 (0:00:00.804) 0:00:20.089 ********
2026-03-28 01:00:07.117518 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.117526 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.117542 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.117550 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.117557 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.117564 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.117571 | orchestrator |
2026-03-28 01:00:07.117578 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-28 01:00:07.117585 | orchestrator | Saturday 28 March 2026 00:48:01 +0000 (0:00:01.478) 0:00:21.567 ********
2026-03-28 01:00:07.117593 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.117599 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.117606 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.117613 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.117620 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.117627 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.117634 | orchestrator |
2026-03-28 01:00:07.117642 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 01:00:07.117649 | orchestrator | Saturday 28 March 2026 00:48:03 +0000 (0:00:01.990) 0:00:23.558 ********
2026-03-28 01:00:07.117656 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.117664 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.117672 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.117679 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.117687 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.117693 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.117701 | orchestrator |
2026-03-28 01:00:07.117708 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-28 01:00:07.117714 | orchestrator | Saturday 28 March 2026 00:48:05 +0000 (0:00:01.552) 0:00:25.110 ********
2026-03-28 01:00:07.117728 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.117736 | orchestrator |
2026-03-28 01:00:07.117743 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-28 01:00:07.117751 | orchestrator | Saturday 28 March 2026 00:48:05 +0000 (0:00:00.476) 0:00:25.587 ********
2026-03-28 01:00:07.117758 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.117766 | orchestrator |
2026-03-28 01:00:07.117773 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 01:00:07.117779 | orchestrator | Saturday 28 March 2026 00:48:06 +0000 (0:00:00.550) 0:00:26.137 ********
2026-03-28 01:00:07.117786 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.117793 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.117800 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.117817 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.117825 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.117834 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.117841 | orchestrator |
2026-03-28 01:00:07.117849 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-28 01:00:07.117856 | orchestrator | Saturday 28 March 2026 00:48:07 +0000 (0:00:01.034) 0:00:27.172 ********
2026-03-28 01:00:07.117864 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.117872 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.117880 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.117888 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.117896 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.117904 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.117911 | orchestrator |
2026-03-28 01:00:07.117919 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-28 01:00:07.117926 | orchestrator | Saturday 28 March 2026 00:48:09 +0000 (0:00:01.874) 0:00:29.046 ********
2026-03-28 01:00:07.117935 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.117942 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.117950 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.117958 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.118110 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.118128 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.118137 | orchestrator |
2026-03-28 01:00:07.118145 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-28 01:00:07.118154 | orchestrator | Saturday 28 March 2026 00:48:10 +0000 (0:00:01.585) 0:00:30.632 ********
2026-03-28 01:00:07.118162 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.118169 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.118195 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.118203 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.118210 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.118218 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.118226 | orchestrator |
2026-03-28 01:00:07.118234 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-28 01:00:07.118242 | orchestrator | Saturday 28 March 2026 00:48:12 +0000 (0:00:01.476) 0:00:32.109 ********
2026-03-28 01:00:07.118249 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.118258 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.118266 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.118274 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.118282 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.118289 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.118297 | orchestrator |
2026-03-28 01:00:07.118304 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-28 01:00:07.118312 | orchestrator | Saturday 28 March 2026 00:48:13 +0000 (0:00:01.598) 0:00:33.707 ********
2026-03-28 01:00:07.118320 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.118327 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.118334 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.118341 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.118348 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.118356 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.118364 | orchestrator |
2026-03-28 01:00:07.118372 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-28 01:00:07.118380 | orchestrator | Saturday 28 March 2026 00:48:15 +0000 (0:00:01.481) 0:00:35.189 ********
2026-03-28 01:00:07.118388 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.118396 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.118404 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.118412 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.118420 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.118427 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.118435 | orchestrator |
2026-03-28 01:00:07.118443 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-28 01:00:07.118450 | orchestrator | Saturday 28 March 2026 00:48:16 +0000 (0:00:00.908) 0:00:36.097 ********
2026-03-28 01:00:07.118460 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e282229f--a8c2--5daa--9c69--6eb93429113b-osd--block--e282229f--a8c2--5daa--9c69--6eb93429113b', 'dm-uuid-LVM-nG28kqN3mbMtKOhRxNmvwhcmB0RqY3ewIJADuQ1rzsvyry0nnXrQl3TraZcM2dNR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:07.118477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1d415d19--3246--5675--b441--c36cba308c79-osd--block--1d415d19--3246--5675--b441--c36cba308c79', 'dm-uuid-LVM-TAOuoGIQrs87MNf2fFw5tIYBVvLNTD1Jx5dfK7NkPuGSRvrVkpDBjv95LS8LOg4E'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:07.118503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:07.118514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:07.118522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:07.118559 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:07.118566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:07.118574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de32c164--f4a0--5092--ad33--650515756f9d-osd--block--de32c164--f4a0--5092--ad33--650515756f9d', 'dm-uuid-LVM-jIb0bnEDbAUwmV2OhoIiGBx2S1hRa36gUlCLm4EMrr716UL3t1D0Y9yUv0cYe1k6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:07.118582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:07.118590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--65811f0f--7bf7--557a--9618--106707fc2899-osd--block--65811f0f--7bf7--557a--9618--106707fc2899', 'dm-uuid-LVM-cx6I8OBVjWE0SdizXW559kKB1PJIgnzMhy0AGwh0g3hhQGmCpJafxNwqcsh3yuUL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:07.118608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:07.118635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:07.118690 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:07.118700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:07.118705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:07.118711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:07.118716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:07.118720 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:07.118725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:07.118787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22', 'scsi-SQEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part1', 'scsi-SQEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part14', 'scsi-SQEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part15', 'scsi-SQEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part16', 'scsi-SQEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 01:00:07.118797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:07.118806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7', 'scsi-SQEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part1', 'scsi-SQEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part14', 'scsi-SQEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part15', 'scsi-SQEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part16', 'scsi-SQEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 01:00:07.118821 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--de32c164--f4a0--5092--ad33--650515756f9d-osd--block--de32c164--f4a0--5092--ad33--650515756f9d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tOfppb-TpHr-M3P1-PFHX-OwRx-oSV7-eydvx7', 'scsi-0QEMU_QEMU_HARDDISK_4cb6368c-0066-4efd-8388-81f1557a02ca', 'scsi-SQEMU_QEMU_HARDDISK_4cb6368c-0066-4efd-8388-81f1557a02ca'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 01:00:07.118828 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e282229f--a8c2--5daa--9c69--6eb93429113b-osd--block--e282229f--a8c2--5daa--9c69--6eb93429113b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Cb9qxz-e1pg-nAfB-Heaf-oN5a-7YP5-H3nqnD', 'scsi-0QEMU_QEMU_HARDDISK_9560503a-139c-4329-8ffd-1ea1e0c721e5', 'scsi-SQEMU_QEMU_HARDDISK_9560503a-139c-4329-8ffd-1ea1e0c721e5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:07.118833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--65811f0f--7bf7--557a--9618--106707fc2899-osd--block--65811f0f--7bf7--557a--9618--106707fc2899'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Zr99fa-WjG2-7bae-3cH7-1JXW-6pj7-qnDezw', 'scsi-0QEMU_QEMU_HARDDISK_b9aebbdd-9418-41ff-9099-90b7dcb703f9', 'scsi-SQEMU_QEMU_HARDDISK_b9aebbdd-9418-41ff-9099-90b7dcb703f9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:07.118839 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f8ddcfbb-f935-4942-af25-8ac280f1cc67', 'scsi-SQEMU_QEMU_HARDDISK_f8ddcfbb-f935-4942-af25-8ac280f1cc67'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:07.118845 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:07.118856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1d415d19--3246--5675--b441--c36cba308c79-osd--block--1d415d19--3246--5675--b441--c36cba308c79'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lqMPFY-0IjU-H0fK-3muW-V5dl-YBJ4-LW0Z8v', 'scsi-0QEMU_QEMU_HARDDISK_64213c7d-5962-413c-aa45-2f60eed78f32', 'scsi-SQEMU_QEMU_HARDDISK_64213c7d-5962-413c-aa45-2f60eed78f32'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:07.118866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_94eace61-73f7-4993-ae2a-02303df71bb3', 'scsi-SQEMU_QEMU_HARDDISK_94eace61-73f7-4993-ae2a-02303df71bb3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:07.118872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8b5a6aab--ec84--598a--adc7--d040a5844549-osd--block--8b5a6aab--ec84--598a--adc7--d040a5844549', 'dm-uuid-LVM-n3x6z0vISm2CJwPGychUi36foVrMCTsVwW5MkFJJ1X5L85t8TBOn3cafSp6hlzA8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.118877 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel 
Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:07.118882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--02fe8db3--ee90--5f59--9f4e--fa58d6febfbe-osd--block--02fe8db3--ee90--5f59--9f4e--fa58d6febfbe', 'dm-uuid-LVM-N6ATf3p9yGvylFwJ3f26f5zsR7t8BGZ4d6cT08TpBrY41fVjdTeLf0cdulABdWlf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.118887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.118892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-28 01:00:07.118901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.118911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.118920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.118926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.118930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.118935 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.118940 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.118978 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01', 'scsi-SQEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part1', 'scsi-SQEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part14', 'scsi-SQEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 
'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part15', 'scsi-SQEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part16', 'scsi-SQEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:07.118995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8b5a6aab--ec84--598a--adc7--d040a5844549-osd--block--8b5a6aab--ec84--598a--adc7--d040a5844549'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3GzFBH-WypZ-MtIJ-87e6-rfO6-th7u-6qcT8D', 'scsi-0QEMU_QEMU_HARDDISK_d59a946d-61ee-4c80-a151-abde4d1a3094', 'scsi-SQEMU_QEMU_HARDDISK_d59a946d-61ee-4c80-a151-abde4d1a3094'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:07.119001 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--02fe8db3--ee90--5f59--9f4e--fa58d6febfbe-osd--block--02fe8db3--ee90--5f59--9f4e--fa58d6febfbe'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wLo2a2-D67r-EL0U-1qJK-1pU0-beyk-Ei8JS9', 'scsi-0QEMU_QEMU_HARDDISK_adec6741-41cb-49e2-9389-e6d1302151a0', 'scsi-SQEMU_QEMU_HARDDISK_adec6741-41cb-49e2-9389-e6d1302151a0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:07.119006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86e8f6ba-fcdd-41b8-9839-c0061159d97d', 'scsi-SQEMU_QEMU_HARDDISK_86e8f6ba-fcdd-41b8-9839-c0061159d97d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:07.119011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:07.119026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.119035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-28 01:00:07.119043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.119048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.119057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.119062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.119067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.119071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.119079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_634fcc3a-1043-40bd-adf5-6b5290b4e5e3', 'scsi-SQEMU_QEMU_HARDDISK_634fcc3a-1043-40bd-adf5-6b5290b4e5e3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_634fcc3a-1043-40bd-adf5-6b5290b4e5e3-part1', 'scsi-SQEMU_QEMU_HARDDISK_634fcc3a-1043-40bd-adf5-6b5290b4e5e3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_634fcc3a-1043-40bd-adf5-6b5290b4e5e3-part14', 'scsi-SQEMU_QEMU_HARDDISK_634fcc3a-1043-40bd-adf5-6b5290b4e5e3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_634fcc3a-1043-40bd-adf5-6b5290b4e5e3-part15', 'scsi-SQEMU_QEMU_HARDDISK_634fcc3a-1043-40bd-adf5-6b5290b4e5e3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_634fcc3a-1043-40bd-adf5-6b5290b4e5e3-part16', 'scsi-SQEMU_QEMU_HARDDISK_634fcc3a-1043-40bd-adf5-6b5290b4e5e3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:07.119094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:07.119099 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.119104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.119109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.119114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.119119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.119128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.119133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.119138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.119146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.119156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b91ecb4-57d6-4807-af9e-4fff691df09c', 'scsi-SQEMU_QEMU_HARDDISK_7b91ecb4-57d6-4807-af9e-4fff691df09c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b91ecb4-57d6-4807-af9e-4fff691df09c-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b91ecb4-57d6-4807-af9e-4fff691df09c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b91ecb4-57d6-4807-af9e-4fff691df09c-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b91ecb4-57d6-4807-af9e-4fff691df09c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b91ecb4-57d6-4807-af9e-4fff691df09c-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b91ecb4-57d6-4807-af9e-4fff691df09c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b91ecb4-57d6-4807-af9e-4fff691df09c-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b91ecb4-57d6-4807-af9e-4fff691df09c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:07.119162 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:07.119189 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.119195 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.119199 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.119204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.119209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.119219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.119224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.119231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.119262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.119267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-03-28 01:00:07.119272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:07.119292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_163cf866-001f-4e5b-a61a-02887cb0e3f0', 'scsi-SQEMU_QEMU_HARDDISK_163cf866-001f-4e5b-a61a-02887cb0e3f0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_163cf866-001f-4e5b-a61a-02887cb0e3f0-part1', 'scsi-SQEMU_QEMU_HARDDISK_163cf866-001f-4e5b-a61a-02887cb0e3f0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_163cf866-001f-4e5b-a61a-02887cb0e3f0-part14', 'scsi-SQEMU_QEMU_HARDDISK_163cf866-001f-4e5b-a61a-02887cb0e3f0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_163cf866-001f-4e5b-a61a-02887cb0e3f0-part15', 'scsi-SQEMU_QEMU_HARDDISK_163cf866-001f-4e5b-a61a-02887cb0e3f0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_163cf866-001f-4e5b-a61a-02887cb0e3f0-part16', 'scsi-SQEMU_QEMU_HARDDISK_163cf866-001f-4e5b-a61a-02887cb0e3f0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:07.119302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:07.119307 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.119312 | orchestrator | 2026-03-28 01:00:07.119317 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-28 01:00:07.119322 | orchestrator | Saturday 28 March 2026 00:48:18 +0000 (0:00:01.918) 0:00:38.016 ******** 2026-03-28 01:00:07.119327 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e282229f--a8c2--5daa--9c69--6eb93429113b-osd--block--e282229f--a8c2--5daa--9c69--6eb93429113b', 
'dm-uuid-LVM-nG28kqN3mbMtKOhRxNmvwhcmB0RqY3ewIJADuQ1rzsvyry0nnXrQl3TraZcM2dNR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119334 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1d415d19--3246--5675--b441--c36cba308c79-osd--block--1d415d19--3246--5675--b441--c36cba308c79', 'dm-uuid-LVM-TAOuoGIQrs87MNf2fFw5tIYBVvLNTD1Jx5dfK7NkPuGSRvrVkpDBjv95LS8LOg4E'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119342 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119348 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119356 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119364 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119369 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119374 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119386 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de32c164--f4a0--5092--ad33--650515756f9d-osd--block--de32c164--f4a0--5092--ad33--650515756f9d', 'dm-uuid-LVM-jIb0bnEDbAUwmV2OhoIiGBx2S1hRa36gUlCLm4EMrr716UL3t1D0Y9yUv0cYe1k6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119392 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--65811f0f--7bf7--557a--9618--106707fc2899-osd--block--65811f0f--7bf7--557a--9618--106707fc2899', 'dm-uuid-LVM-cx6I8OBVjWE0SdizXW559kKB1PJIgnzMhy0AGwh0g3hhQGmCpJafxNwqcsh3yuUL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119400 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119410 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119416 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119421 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119429 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 
01:00:07.119433 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119438 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119451 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22', 'scsi-SQEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part1', 'scsi-SQEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part14', 'scsi-SQEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part15', 'scsi-SQEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part16', 'scsi-SQEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-28 01:00:07.119461 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119466 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119471 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119483 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7', 'scsi-SQEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part1', 'scsi-SQEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part14', 'scsi-SQEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part15', 'scsi-SQEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part16', 'scsi-SQEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 
'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119493 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--de32c164--f4a0--5092--ad33--650515756f9d-osd--block--de32c164--f4a0--5092--ad33--650515756f9d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tOfppb-TpHr-M3P1-PFHX-OwRx-oSV7-eydvx7', 'scsi-0QEMU_QEMU_HARDDISK_4cb6368c-0066-4efd-8388-81f1557a02ca', 'scsi-SQEMU_QEMU_HARDDISK_4cb6368c-0066-4efd-8388-81f1557a02ca'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119498 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e282229f--a8c2--5daa--9c69--6eb93429113b-osd--block--e282229f--a8c2--5daa--9c69--6eb93429113b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Cb9qxz-e1pg-nAfB-Heaf-oN5a-7YP5-H3nqnD', 'scsi-0QEMU_QEMU_HARDDISK_9560503a-139c-4329-8ffd-1ea1e0c721e5', 'scsi-SQEMU_QEMU_HARDDISK_9560503a-139c-4329-8ffd-1ea1e0c721e5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119509 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--65811f0f--7bf7--557a--9618--106707fc2899-osd--block--65811f0f--7bf7--557a--9618--106707fc2899'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Zr99fa-WjG2-7bae-3cH7-1JXW-6pj7-qnDezw', 'scsi-0QEMU_QEMU_HARDDISK_b9aebbdd-9418-41ff-9099-90b7dcb703f9', 'scsi-SQEMU_QEMU_HARDDISK_b9aebbdd-9418-41ff-9099-90b7dcb703f9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119515 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f8ddcfbb-f935-4942-af25-8ac280f1cc67', 'scsi-SQEMU_QEMU_HARDDISK_f8ddcfbb-f935-4942-af25-8ac280f1cc67'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119527 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119532 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1d415d19--3246--5675--b441--c36cba308c79-osd--block--1d415d19--3246--5675--b441--c36cba308c79'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lqMPFY-0IjU-H0fK-3muW-V5dl-YBJ4-LW0Z8v', 'scsi-0QEMU_QEMU_HARDDISK_64213c7d-5962-413c-aa45-2f60eed78f32', 'scsi-SQEMU_QEMU_HARDDISK_64213c7d-5962-413c-aa45-2f60eed78f32'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119602 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_94eace61-73f7-4993-ae2a-02303df71bb3', 'scsi-SQEMU_QEMU_HARDDISK_94eace61-73f7-4993-ae2a-02303df71bb3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119614 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119620 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.119625 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8b5a6aab--ec84--598a--adc7--d040a5844549-osd--block--8b5a6aab--ec84--598a--adc7--d040a5844549', 'dm-uuid-LVM-n3x6z0vISm2CJwPGychUi36foVrMCTsVwW5MkFJJ1X5L85t8TBOn3cafSp6hlzA8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119635 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119641 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119647 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--02fe8db3--ee90--5f59--9f4e--fa58d6febfbe-osd--block--02fe8db3--ee90--5f59--9f4e--fa58d6febfbe', 'dm-uuid-LVM-N6ATf3p9yGvylFwJ3f26f5zsR7t8BGZ4d6cT08TpBrY41fVjdTeLf0cdulABdWlf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119658 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119670 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119690 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.119699 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119707 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119715 | orchestrator | skipping: [testbed-node-0] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119837 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119849 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.119863 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120428 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120481 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120489 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120497 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120505 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120524 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 
'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120655 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120678 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120686 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120694 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120701 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120709 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120723 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120745 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120754 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120763 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01', 'scsi-SQEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part1', 'scsi-SQEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part14', 'scsi-SQEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part15', 'scsi-SQEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part16', 'scsi-SQEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120775 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120796 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_634fcc3a-1043-40bd-adf5-6b5290b4e5e3', 'scsi-SQEMU_QEMU_HARDDISK_634fcc3a-1043-40bd-adf5-6b5290b4e5e3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_634fcc3a-1043-40bd-adf5-6b5290b4e5e3-part1', 'scsi-SQEMU_QEMU_HARDDISK_634fcc3a-1043-40bd-adf5-6b5290b4e5e3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_634fcc3a-1043-40bd-adf5-6b5290b4e5e3-part14', 'scsi-SQEMU_QEMU_HARDDISK_634fcc3a-1043-40bd-adf5-6b5290b4e5e3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_634fcc3a-1043-40bd-adf5-6b5290b4e5e3-part15', 'scsi-SQEMU_QEMU_HARDDISK_634fcc3a-1043-40bd-adf5-6b5290b4e5e3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_634fcc3a-1043-40bd-adf5-6b5290b4e5e3-part16', 'scsi-SQEMU_QEMU_HARDDISK_634fcc3a-1043-40bd-adf5-6b5290b4e5e3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-28 01:00:07.120806 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8b5a6aab--ec84--598a--adc7--d040a5844549-osd--block--8b5a6aab--ec84--598a--adc7--d040a5844549'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3GzFBH-WypZ-MtIJ-87e6-rfO6-th7u-6qcT8D', 'scsi-0QEMU_QEMU_HARDDISK_d59a946d-61ee-4c80-a151-abde4d1a3094', 'scsi-SQEMU_QEMU_HARDDISK_d59a946d-61ee-4c80-a151-abde4d1a3094'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120857 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120881 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': 
['ceph--02fe8db3--ee90--5f59--9f4e--fa58d6febfbe-osd--block--02fe8db3--ee90--5f59--9f4e--fa58d6febfbe'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wLo2a2-D67r-EL0U-1qJK-1pU0-beyk-Ei8JS9', 'scsi-0QEMU_QEMU_HARDDISK_adec6741-41cb-49e2-9389-e6d1302151a0', 'scsi-SQEMU_QEMU_HARDDISK_adec6741-41cb-49e2-9389-e6d1302151a0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120890 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_163cf866-001f-4e5b-a61a-02887cb0e3f0', 'scsi-SQEMU_QEMU_HARDDISK_163cf866-001f-4e5b-a61a-02887cb0e3f0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_163cf866-001f-4e5b-a61a-02887cb0e3f0-part1', 'scsi-SQEMU_QEMU_HARDDISK_163cf866-001f-4e5b-a61a-02887cb0e3f0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_163cf866-001f-4e5b-a61a-02887cb0e3f0-part14', 'scsi-SQEMU_QEMU_HARDDISK_163cf866-001f-4e5b-a61a-02887cb0e3f0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_163cf866-001f-4e5b-a61a-02887cb0e3f0-part15', 'scsi-SQEMU_QEMU_HARDDISK_163cf866-001f-4e5b-a61a-02887cb0e3f0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_163cf866-001f-4e5b-a61a-02887cb0e3f0-part16', 'scsi-SQEMU_QEMU_HARDDISK_163cf866-001f-4e5b-a61a-02887cb0e3f0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-28 01:00:07.120905 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120925 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86e8f6ba-fcdd-41b8-9839-c0061159d97d', 'scsi-SQEMU_QEMU_HARDDISK_86e8f6ba-fcdd-41b8-9839-c0061159d97d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120934 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120942 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:07.120950 | 
orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.120960 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 01:00:07.120966 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.120971 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.120976 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 01:00:07.120984 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 01:00:07.120998 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 01:00:07.121004 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 01:00:07.121011 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b91ecb4-57d6-4807-af9e-4fff691df09c', 'scsi-SQEMU_QEMU_HARDDISK_7b91ecb4-57d6-4807-af9e-4fff691df09c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b91ecb4-57d6-4807-af9e-4fff691df09c-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b91ecb4-57d6-4807-af9e-4fff691df09c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b91ecb4-57d6-4807-af9e-4fff691df09c-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b91ecb4-57d6-4807-af9e-4fff691df09c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b91ecb4-57d6-4807-af9e-4fff691df09c-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b91ecb4-57d6-4807-af9e-4fff691df09c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b91ecb4-57d6-4807-af9e-4fff691df09c-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b91ecb4-57d6-4807-af9e-4fff691df09c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 01:00:07.121024 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 01:00:07.121030 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.121035 | orchestrator |
2026-03-28 01:00:07.121044 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-28 01:00:07.121051 | orchestrator | Saturday 28 March 2026 00:48:19 +0000 (0:00:01.349) 0:00:39.365 ********
2026-03-28 01:00:07.121056 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.121062 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.121067 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.121073 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:00:07.121078 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:00:07.121084 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:00:07.121090 | orchestrator |
2026-03-28 01:00:07.121095 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-28 01:00:07.121101 | orchestrator | Saturday 28 March 2026 00:48:21 +0000 (0:00:01.882) 0:00:41.248 ********
2026-03-28 01:00:07.121106 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.121112 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.121117 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.121123 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:00:07.121128 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:00:07.121134 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:00:07.121139 | orchestrator |
2026-03-28 01:00:07.121145 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-28 01:00:07.121151 | orchestrator | Saturday 28 March 2026 00:48:22 +0000 (0:00:01.032) 0:00:42.280 ********
2026-03-28 01:00:07.121156 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.121162 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.121167 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.121225 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.121232 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.121238 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.121243 | orchestrator |
2026-03-28 01:00:07.121249 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-28 01:00:07.121255 | orchestrator | Saturday 28 March 2026 00:48:24 +0000 (0:00:02.207) 0:00:44.487 ********
2026-03-28 01:00:07.121260 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.121265 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.121269 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.121274 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.121279 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.121283 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.121288 | orchestrator |
2026-03-28 01:00:07.121293 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-28 01:00:07.121297 | orchestrator | Saturday 28 March 2026 00:48:25 +0000 (0:00:01.294) 0:00:45.782 ********
2026-03-28 01:00:07.121302 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.121307 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.121311 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.121316 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.121320 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.121332 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.121336 | orchestrator |
2026-03-28 01:00:07.121341 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-28 01:00:07.121346 | orchestrator | Saturday 28 March 2026 00:48:27 +0000 (0:00:01.411) 0:00:47.194 ********
2026-03-28 01:00:07.121351 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.121355 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.121360 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.121364 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.121369 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.121373 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.121420 | orchestrator |
2026-03-28 01:00:07.121425 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-28 01:00:07.121429 | orchestrator | Saturday 28 March 2026 00:48:28 +0000 (0:00:01.110) 0:00:48.305 ********
2026-03-28 01:00:07.121434 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 01:00:07.121439 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 01:00:07.121443 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-28 01:00:07.121448 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-28 01:00:07.121453 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-28 01:00:07.121457 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-28 01:00:07.121462 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 01:00:07.121467 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-28 01:00:07.121471 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-28 01:00:07.121476 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-28 01:00:07.121480 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-28 01:00:07.121485 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-28 01:00:07.121489 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-28 01:00:07.121494 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-28 01:00:07.121499 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-28 01:00:07.121503 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-28 01:00:07.121508 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-28 01:00:07.121512 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-28 01:00:07.121517 | orchestrator |
2026-03-28 01:00:07.121522 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-28 01:00:07.121526 | orchestrator | Saturday 28 March 2026 00:48:34 +0000 (0:00:06.347) 0:00:54.653 ********
2026-03-28 01:00:07.121537 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 01:00:07.121542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 01:00:07.121546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-28 01:00:07.121551 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.121555 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-28 01:00:07.121560 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-28 01:00:07.121565 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-28 01:00:07.121569 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.121574 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-28 01:00:07.121582 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-28 01:00:07.121587 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-28 01:00:07.121591 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.121596 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 01:00:07.121600 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-28 01:00:07.121605 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-28 01:00:07.121615 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.121619 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-28 01:00:07.121624 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-28 01:00:07.121628 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-28 01:00:07.121633 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.121637 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-28 01:00:07.121642 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-28 01:00:07.121646 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-28 01:00:07.121651 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.121656 | orchestrator |
2026-03-28 01:00:07.121660 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-28 01:00:07.121665 | orchestrator | Saturday 28 March 2026 00:48:36 +0000 (0:00:01.515) 0:00:56.168 ********
2026-03-28 01:00:07.121669 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.121674 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.121679 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.121684 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 01:00:07.121688 | orchestrator |
2026-03-28 01:00:07.121693 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-28 01:00:07.121699 | orchestrator | Saturday 28 March 2026 00:48:38 +0000 (0:00:01.881) 0:00:58.050 ********
2026-03-28 01:00:07.121704 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.121708 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.121713 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.121718 | orchestrator |
2026-03-28 01:00:07.121722 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-28 01:00:07.121727 | orchestrator | Saturday 28 March 2026 00:48:38 +0000 (0:00:00.694) 0:00:58.745 ********
2026-03-28 01:00:07.121731 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.121736 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.121741 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.121745 | orchestrator |
2026-03-28 01:00:07.121750 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-28 01:00:07.121754 | orchestrator | Saturday 28 March 2026 00:48:39 +0000 (0:00:01.089) 0:00:59.309 ********
2026-03-28 01:00:07.121759 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.121764 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.121768 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.121773 | orchestrator |
2026-03-28 01:00:07.121777 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-28 01:00:07.121782 | orchestrator | Saturday 28 March 2026 00:48:40 +0000 (0:00:01.089) 0:01:00.399 ********
2026-03-28 01:00:07.121787 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.121791 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.121796 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.121800 | orchestrator |
2026-03-28 01:00:07.121805 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-28 01:00:07.121810 | orchestrator | Saturday 28 March 2026 00:48:41 +0000 (0:00:00.990) 0:01:01.390 ********
2026-03-28 01:00:07.121814 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 01:00:07.121819 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 01:00:07.121823 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 01:00:07.121828 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.121832 | orchestrator |
2026-03-28 01:00:07.121837 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-28 01:00:07.121842 | orchestrator | Saturday 28 March 2026 00:48:42 +0000 (0:00:00.495) 0:01:01.885 ********
2026-03-28 01:00:07.121846 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 01:00:07.121854 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 01:00:07.121859 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 01:00:07.121863 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.121867 | orchestrator |
2026-03-28 01:00:07.121872 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-28 01:00:07.121877 | orchestrator | Saturday 28 March 2026 00:48:42 +0000 (0:00:00.551) 0:01:02.436 ********
2026-03-28 01:00:07.121881 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 01:00:07.121886 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 01:00:07.121890 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 01:00:07.121895 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.121899 | orchestrator |
2026-03-28 01:00:07.121907 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-28 01:00:07.121912 | orchestrator | Saturday 28 March 2026 00:48:43 +0000 (0:00:00.706) 0:01:03.143 ********
2026-03-28 01:00:07.121919 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.121925 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.121933 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.121939 | orchestrator |
2026-03-28 01:00:07.121946 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-28 01:00:07.121952 | orchestrator | Saturday 28 March 2026 00:48:44 +0000 (0:00:00.662) 0:01:03.806 ********
2026-03-28 01:00:07.121959 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-28 01:00:07.121966 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-28 01:00:07.121977 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-28 01:00:07.121983 | orchestrator |
2026-03-28 01:00:07.121989 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-28 01:00:07.121996 | orchestrator | Saturday 28 March 2026 00:48:45 +0000 (0:00:01.748) 0:01:05.554 ********
2026-03-28 01:00:07.122003 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 01:00:07.122009 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 01:00:07.122069 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 01:00:07.122077 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 01:00:07.122086 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-28 01:00:07.122094 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-28 01:00:07.122103 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-28 01:00:07.122110 | orchestrator |
2026-03-28 01:00:07.122119 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-28 01:00:07.122127 | orchestrator | Saturday 28 March 2026 00:48:47 +0000 (0:00:01.299) 0:01:06.853 ********
2026-03-28 01:00:07.122134 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 01:00:07.122142 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 01:00:07.122151 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 01:00:07.122158 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 01:00:07.122166 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-28 01:00:07.122187 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-28 01:00:07.122195 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-28 01:00:07.122201 | orchestrator |
2026-03-28 01:00:07.122209 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-28 01:00:07.122216 | orchestrator | Saturday 28 March 2026 00:48:50 +0000 (0:00:03.439) 0:01:10.292 ********
2026-03-28 01:00:07.122232 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:00:07.122241 | orchestrator |
2026-03-28 01:00:07.122248 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-28 01:00:07.122255 | orchestrator | Saturday 28 March 2026 00:48:52 +0000 (0:00:02.040) 0:01:12.333 ********
2026-03-28 01:00:07.122262 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:00:07.122269 | orchestrator |
2026-03-28 01:00:07.122276 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-28 01:00:07.122282 | orchestrator | Saturday 28 March 2026 00:48:54 +0000 (0:00:01.694) 0:01:14.028 ********
2026-03-28 01:00:07.122289 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.122296 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.122303 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.122310 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:00:07.122317 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:00:07.122323 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:00:07.122330 | orchestrator |
2026-03-28 01:00:07.122337 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-28 01:00:07.122345 | orchestrator | Saturday 28 March 2026 00:48:56 +0000 (0:00:02.763) 0:01:16.792 ********
2026-03-28 01:00:07.122352 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.122359 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.122367 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.122374 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.122381 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.122389 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.122396 | orchestrator |
2026-03-28 01:00:07.122404 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-28 01:00:07.122411 | orchestrator | Saturday 28 March 2026 00:48:59 +0000 (0:00:02.277) 0:01:19.070 ********
2026-03-28 01:00:07.122416 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.122421 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.122425 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.122430 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.122435 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.122439 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.122444 | orchestrator |
2026-03-28 01:00:07.122448 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-28 01:00:07.122453 | orchestrator | Saturday 28 March 2026 00:49:01 +0000 (0:00:01.771) 0:01:20.842 ********
2026-03-28 01:00:07.122458 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.122467 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.122472 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.122476 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.122481 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.122485 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.122490 | orchestrator |
2026-03-28 01:00:07.122494 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-28 01:00:07.122499 | orchestrator | Saturday 28 March 2026 00:49:02 +0000 (0:00:01.549) 0:01:22.392 ********
2026-03-28 01:00:07.122504 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.122508 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.122513 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.122517 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:00:07.122522 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:00:07.122538 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:00:07.122543 | orchestrator |
2026-03-28 01:00:07.122548 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-28 01:00:07.122553 | orchestrator | Saturday 28 March 2026 00:49:04 +0000 (0:00:01.887) 0:01:24.279 ********
2026-03-28 01:00:07.122563 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.122568 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.122572 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.122577 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.122581 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.122586 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.122590 | orchestrator |
2026-03-28 01:00:07.122595 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-28 01:00:07.122600 | orchestrator | Saturday 28 March 2026 00:49:05 +0000 (0:00:01.194) 0:01:25.473 ********
2026-03-28 01:00:07.122604 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.122609 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.122614 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.122618 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.122623 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.122627 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.122632 | orchestrator |
2026-03-28 01:00:07.122637 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-28 01:00:07.122641 | orchestrator | Saturday 28 March 2026 00:49:06 +0000 (0:00:01.202) 0:01:26.676 ********
2026-03-28 01:00:07.122646 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.122651 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.122655 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.122660 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:00:07.122664 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:00:07.122669 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:00:07.122674 | orchestrator |
2026-03-28 01:00:07.122678 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********
2026-03-28 01:00:07.122683 | orchestrator | Saturday 28 March 2026 00:49:08 +0000 (0:00:01.806) 0:01:28.483 ********
2026-03-28 01:00:07.122687 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.122692 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.122696 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.122701 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:00:07.122705 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:00:07.122710 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:00:07.122715 | orchestrator |
2026-03-28 01:00:07.122719 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-28 01:00:07.122724 | orchestrator | Saturday 28 March 2026 00:49:10 +0000 (0:00:01.815) 0:01:30.298 ********
2026-03-28 01:00:07.122728 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.122733 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.122738 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.122742 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.122747 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.122752 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.122756 | orchestrator |
2026-03-28 01:00:07.122761 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-28 01:00:07.122766 | orchestrator | Saturday 28 March 2026 00:49:12 +0000 (0:00:01.536) 0:01:31.835 ********
2026-03-28 01:00:07.122770 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.122775 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.122779 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.122784 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:00:07.122788 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:00:07.122793 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:00:07.122798 | orchestrator |
2026-03-28 01:00:07.122802 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-28 01:00:07.122807 | orchestrator | Saturday 28 March 2026 00:49:13 +0000 (0:00:01.862) 0:01:33.697 ********
2026-03-28 01:00:07.122812 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.122816 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.122821 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.122825 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.122838 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.122843 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.122847 | orchestrator |
2026-03-28 01:00:07.122852 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-28 01:00:07.122857 | orchestrator | Saturday 28 March 2026 00:49:15 +0000 (0:00:01.372) 0:01:35.070 ********
2026-03-28 01:00:07.122861 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.122866 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.122870 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.122875 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.122880 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.122884 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.122889 | orchestrator |
2026-03-28 01:00:07.122893 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-28 01:00:07.122898 | orchestrator | Saturday 28 March 2026 00:49:16 +0000 (0:00:01.595) 0:01:36.665 ********
2026-03-28 01:00:07.122903 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.122908 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.122912 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.122917 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.122921 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.122926 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.122931 | orchestrator |
2026-03-28 01:00:07.122935 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-28 01:00:07.122940 | orchestrator | Saturday 28 March 2026 00:49:17 +0000 (0:00:01.028) 0:01:37.694 ********
2026-03-28 01:00:07.122948 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.122953 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.122957 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.122962 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.122967 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.122971 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.122976 | orchestrator |
2026-03-28 01:00:07.122981 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-28 01:00:07.122985 | orchestrator | Saturday 28 March 2026 00:49:19 +0000 (0:00:01.462) 0:01:39.156 ********
2026-03-28 01:00:07.122990 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.122995 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.122999 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.123004 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.123013 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.123018 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.123022 | orchestrator |
2026-03-28 01:00:07.123027 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-28 01:00:07.123032 | orchestrator | Saturday 28 March 2026 00:49:20 +0000 (0:00:00.860) 0:01:40.017 ********
2026-03-28 01:00:07.123036 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.123041 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.123046 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.123050 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:00:07.123055 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:00:07.123059 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:00:07.123064 | orchestrator |
2026-03-28 01:00:07.123069 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-28 01:00:07.123073 | orchestrator | Saturday 28 March 2026 00:49:21 +0000 (0:00:01.623) 0:01:41.641 ********
2026-03-28 01:00:07.123078 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.123083 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.123088 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.123093 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:00:07.123097 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:00:07.123102 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:00:07.123106 | orchestrator |
2026-03-28 01:00:07.123111 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-28 01:00:07.123120 | orchestrator | Saturday 28 March 2026 00:49:22 +0000 (0:00:00.907) 0:01:42.548 ********
2026-03-28 01:00:07.123124 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.123129 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.123134 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.123139 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:00:07.123143 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:00:07.123148 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:00:07.123153 | orchestrator |
2026-03-28 01:00:07.123157 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-28 01:00:07.123162 | orchestrator | Saturday 28 March 2026 00:49:24 +0000 (0:00:01.391) 0:01:43.940 ********
2026-03-28 01:00:07.123166 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:00:07.123185 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:00:07.123191 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:00:07.123195 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:00:07.123200 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:00:07.123205 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:00:07.123210 | orchestrator |
2026-03-28 01:00:07.123214 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-28 01:00:07.123219 | orchestrator | Saturday 28 March 2026 00:49:26 +0000 (0:00:01.999) 0:01:45.940 ********
2026-03-28 01:00:07.123224 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:00:07.123228 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:00:07.123233 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:00:07.123237 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:00:07.123242 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:00:07.123246 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:00:07.123251 | orchestrator |
2026-03-28 01:00:07.123256 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-28 01:00:07.123260 | orchestrator | Saturday 28 March 2026 00:49:28 +0000 (0:00:02.709) 0:01:48.649 ********
2026-03-28 01:00:07.123265 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:00:07.123270 | orchestrator |
2026-03-28 01:00:07.123275 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-28 01:00:07.123279 | orchestrator | Saturday 28 March 2026 00:49:30 +0000 (0:00:01.387) 0:01:50.037 ********
2026-03-28 01:00:07.123284 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.123288 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.123293 | orchestrator |
skipping: [testbed-node-5] 2026-03-28 01:00:07.123298 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.123302 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.123307 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.123311 | orchestrator | 2026-03-28 01:00:07.123316 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-28 01:00:07.123321 | orchestrator | Saturday 28 March 2026 00:49:30 +0000 (0:00:00.692) 0:01:50.729 ******** 2026-03-28 01:00:07.123325 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.123330 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.123334 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.123339 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.123343 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.123348 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.123353 | orchestrator | 2026-03-28 01:00:07.123357 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-28 01:00:07.123362 | orchestrator | Saturday 28 March 2026 00:49:31 +0000 (0:00:00.741) 0:01:51.470 ******** 2026-03-28 01:00:07.123367 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-28 01:00:07.123371 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-28 01:00:07.123376 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-28 01:00:07.123386 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-28 01:00:07.123394 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-28 01:00:07.123399 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-28 01:00:07.123404 | orchestrator | ok: [testbed-node-3] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-28 01:00:07.123408 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-28 01:00:07.123413 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-28 01:00:07.123418 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-28 01:00:07.123426 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-28 01:00:07.123431 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-28 01:00:07.123435 | orchestrator | 2026-03-28 01:00:07.123440 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-28 01:00:07.123445 | orchestrator | Saturday 28 March 2026 00:49:32 +0000 (0:00:01.150) 0:01:52.621 ******** 2026-03-28 01:00:07.123449 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:00:07.123454 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:00:07.123458 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:00:07.123463 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:07.123468 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:07.123472 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:07.123478 | orchestrator | 2026-03-28 01:00:07.123482 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-28 01:00:07.123487 | orchestrator | Saturday 28 March 2026 00:49:33 +0000 (0:00:01.143) 0:01:53.764 ******** 2026-03-28 01:00:07.123491 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.123496 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.123501 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.123506 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.123510 | 
orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.123515 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.123519 | orchestrator | 2026-03-28 01:00:07.123524 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-28 01:00:07.123529 | orchestrator | Saturday 28 March 2026 00:49:34 +0000 (0:00:00.613) 0:01:54.378 ******** 2026-03-28 01:00:07.123533 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.123538 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.123543 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.123548 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.123552 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.123557 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.123561 | orchestrator | 2026-03-28 01:00:07.123566 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-28 01:00:07.123570 | orchestrator | Saturday 28 March 2026 00:49:35 +0000 (0:00:00.944) 0:01:55.322 ******** 2026-03-28 01:00:07.123575 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.123579 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.123584 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.123588 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.123593 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.123598 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.123602 | orchestrator | 2026-03-28 01:00:07.123607 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-28 01:00:07.123611 | orchestrator | Saturday 28 March 2026 00:49:36 +0000 (0:00:00.715) 0:01:56.037 ******** 2026-03-28 01:00:07.123616 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:07.123626 | orchestrator | 2026-03-28 01:00:07.123631 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-28 01:00:07.123635 | orchestrator | Saturday 28 March 2026 00:49:37 +0000 (0:00:01.350) 0:01:57.387 ******** 2026-03-28 01:00:07.123640 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.123645 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.123649 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.123654 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.123658 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.123663 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.123668 | orchestrator | 2026-03-28 01:00:07.123672 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-28 01:00:07.123677 | orchestrator | Saturday 28 March 2026 00:50:33 +0000 (0:00:55.876) 0:02:53.265 ******** 2026-03-28 01:00:07.123682 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-28 01:00:07.123686 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-28 01:00:07.123691 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-28 01:00:07.123696 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.123701 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-28 01:00:07.123705 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-28 01:00:07.123710 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-28 01:00:07.123715 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.123719 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-28 
01:00:07.123724 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-28 01:00:07.123728 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-28 01:00:07.123733 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.123738 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-28 01:00:07.123746 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-28 01:00:07.123750 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-28 01:00:07.123755 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.123760 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-28 01:00:07.123764 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-28 01:00:07.123769 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-28 01:00:07.123773 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.123781 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-28 01:00:07.123786 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-28 01:00:07.123791 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-28 01:00:07.123795 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.123800 | orchestrator | 2026-03-28 01:00:07.123805 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-28 01:00:07.123809 | orchestrator | Saturday 28 March 2026 00:50:34 +0000 (0:00:01.140) 0:02:54.405 ******** 2026-03-28 01:00:07.123814 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.123819 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.123824 | 
orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.123828 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.123833 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.123837 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.123842 | orchestrator | 2026-03-28 01:00:07.123847 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-28 01:00:07.123856 | orchestrator | Saturday 28 March 2026 00:50:35 +0000 (0:00:01.270) 0:02:55.676 ******** 2026-03-28 01:00:07.123861 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.123865 | orchestrator | 2026-03-28 01:00:07.123870 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-28 01:00:07.123875 | orchestrator | Saturday 28 March 2026 00:50:36 +0000 (0:00:00.155) 0:02:55.832 ******** 2026-03-28 01:00:07.123879 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.123884 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.123888 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.123893 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.123897 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.123902 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.123907 | orchestrator | 2026-03-28 01:00:07.123911 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-28 01:00:07.123916 | orchestrator | Saturday 28 March 2026 00:50:36 +0000 (0:00:00.829) 0:02:56.662 ******** 2026-03-28 01:00:07.123921 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.123926 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.123930 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.123935 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.123939 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.123944 | 
orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.123951 | orchestrator | 2026-03-28 01:00:07.123959 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-28 01:00:07.123967 | orchestrator | Saturday 28 March 2026 00:50:37 +0000 (0:00:01.081) 0:02:57.743 ******** 2026-03-28 01:00:07.123974 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.123981 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.123988 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.123995 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.124003 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.124009 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.124015 | orchestrator | 2026-03-28 01:00:07.124022 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-28 01:00:07.124029 | orchestrator | Saturday 28 March 2026 00:50:38 +0000 (0:00:00.900) 0:02:58.644 ******** 2026-03-28 01:00:07.124036 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.124044 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.124050 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.124057 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.124064 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.124070 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.124077 | orchestrator | 2026-03-28 01:00:07.124083 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-28 01:00:07.124091 | orchestrator | Saturday 28 March 2026 00:50:42 +0000 (0:00:03.497) 0:03:02.141 ******** 2026-03-28 01:00:07.124097 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.124104 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.124110 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.124116 | orchestrator | ok: [testbed-node-0] 2026-03-28 
01:00:07.124123 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.124129 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.124136 | orchestrator | 2026-03-28 01:00:07.124143 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-28 01:00:07.124151 | orchestrator | Saturday 28 March 2026 00:50:43 +0000 (0:00:00.772) 0:03:02.914 ******** 2026-03-28 01:00:07.124159 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:07.124167 | orchestrator | 2026-03-28 01:00:07.124219 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-28 01:00:07.124227 | orchestrator | Saturday 28 March 2026 00:50:44 +0000 (0:00:01.746) 0:03:04.660 ******** 2026-03-28 01:00:07.124246 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.124252 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.124259 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.124266 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.124273 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.124280 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.124288 | orchestrator | 2026-03-28 01:00:07.124295 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-28 01:00:07.124308 | orchestrator | Saturday 28 March 2026 00:50:45 +0000 (0:00:00.935) 0:03:05.595 ******** 2026-03-28 01:00:07.124315 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.124322 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.124329 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.124336 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.124342 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.124348 | 
orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.124355 | orchestrator | 2026-03-28 01:00:07.124361 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-28 01:00:07.124368 | orchestrator | Saturday 28 March 2026 00:50:46 +0000 (0:00:00.632) 0:03:06.227 ******** 2026-03-28 01:00:07.124375 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.124383 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.124399 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.124406 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.124413 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.124420 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.124427 | orchestrator | 2026-03-28 01:00:07.124434 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-28 01:00:07.124442 | orchestrator | Saturday 28 March 2026 00:50:47 +0000 (0:00:00.938) 0:03:07.166 ******** 2026-03-28 01:00:07.124449 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.124458 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.124465 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.124472 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.124480 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.124487 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.124495 | orchestrator | 2026-03-28 01:00:07.124501 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-28 01:00:07.124509 | orchestrator | Saturday 28 March 2026 00:50:48 +0000 (0:00:00.702) 0:03:07.869 ******** 2026-03-28 01:00:07.124516 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.124523 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.124532 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.124537 | 
orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.124541 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.124546 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.124551 | orchestrator | 2026-03-28 01:00:07.124555 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-28 01:00:07.124560 | orchestrator | Saturday 28 March 2026 00:50:49 +0000 (0:00:00.943) 0:03:08.812 ******** 2026-03-28 01:00:07.124565 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.124569 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.124574 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.124579 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.124583 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.124588 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.124592 | orchestrator | 2026-03-28 01:00:07.124597 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-28 01:00:07.124602 | orchestrator | Saturday 28 March 2026 00:50:49 +0000 (0:00:00.878) 0:03:09.691 ******** 2026-03-28 01:00:07.124606 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.124611 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.124622 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.124627 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.124632 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.124636 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.124641 | orchestrator | 2026-03-28 01:00:07.124646 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-28 01:00:07.124650 | orchestrator | Saturday 28 March 2026 00:50:51 +0000 (0:00:01.436) 0:03:11.128 ******** 2026-03-28 01:00:07.124655 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.124659 | 
orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.124664 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.124669 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.124673 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.124678 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.124682 | orchestrator | 2026-03-28 01:00:07.124687 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-28 01:00:07.124692 | orchestrator | Saturday 28 March 2026 00:50:52 +0000 (0:00:01.061) 0:03:12.189 ******** 2026-03-28 01:00:07.124697 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.124701 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.124706 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.124711 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.124715 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.124720 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.124724 | orchestrator | 2026-03-28 01:00:07.124729 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-28 01:00:07.124734 | orchestrator | Saturday 28 March 2026 00:50:54 +0000 (0:00:01.815) 0:03:14.005 ******** 2026-03-28 01:00:07.124740 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:07.124744 | orchestrator | 2026-03-28 01:00:07.124749 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-28 01:00:07.124754 | orchestrator | Saturday 28 March 2026 00:50:56 +0000 (0:00:01.998) 0:03:16.004 ******** 2026-03-28 01:00:07.124759 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-03-28 01:00:07.124764 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-03-28 01:00:07.124769 | 
orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-03-28 01:00:07.124773 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-28 01:00:07.124778 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-03-28 01:00:07.124783 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-28 01:00:07.124787 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-28 01:00:07.124792 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-03-28 01:00:07.124797 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-28 01:00:07.124801 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-03-28 01:00:07.124811 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-28 01:00:07.124815 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-28 01:00:07.124820 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-28 01:00:07.124825 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-28 01:00:07.124829 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-03-28 01:00:07.124834 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-28 01:00:07.124838 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-28 01:00:07.124843 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-28 01:00:07.124852 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-28 01:00:07.124856 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-28 01:00:07.124861 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-03-28 01:00:07.124870 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-28 01:00:07.124875 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-28 
01:00:07.124880 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-28 01:00:07.124884 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-28 01:00:07.124889 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-28 01:00:07.124894 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-03-28 01:00:07.124899 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-28 01:00:07.124903 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-28 01:00:07.124908 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-28 01:00:07.124913 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-28 01:00:07.124918 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-28 01:00:07.124922 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-03-28 01:00:07.124927 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-28 01:00:07.124931 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-28 01:00:07.124936 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-28 01:00:07.124941 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-28 01:00:07.124946 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-28 01:00:07.124951 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-03-28 01:00:07.124955 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-28 01:00:07.124960 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-28 01:00:07.124965 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-28 01:00:07.124969 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-28 01:00:07.124974 | 
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-28 01:00:07.124979 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-03-28 01:00:07.124983 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-28 01:00:07.124988 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-28 01:00:07.124992 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-28 01:00:07.124997 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-28 01:00:07.125002 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-03-28 01:00:07.125006 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-28 01:00:07.125011 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-28 01:00:07.125015 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-28 01:00:07.125020 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-28 01:00:07.125025 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-28 01:00:07.125029 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-28 01:00:07.125034 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-28 01:00:07.125039 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-28 01:00:07.125043 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-28 01:00:07.125048 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-28 01:00:07.125053 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-28 01:00:07.125057 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-28 
01:00:07.125066 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 01:00:07.125071 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 01:00:07.125075 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 01:00:07.125080 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 01:00:07.125085 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 01:00:07.125089 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 01:00:07.125094 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 01:00:07.125099 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 01:00:07.125106 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 01:00:07.125111 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 01:00:07.125116 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 01:00:07.125121 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 01:00:07.125125 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 01:00:07.125130 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 01:00:07.125138 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 01:00:07.125142 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 01:00:07.125147 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 01:00:07.125152 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 01:00:07.125156 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-28 01:00:07.125161 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 01:00:07.125166 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-28 01:00:07.125170 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-28 01:00:07.125198 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 01:00:07.125204 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 01:00:07.125208 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 01:00:07.125213 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-28 01:00:07.125217 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-28 01:00:07.125222 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-28 01:00:07.125227 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-28 01:00:07.125231 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-28 01:00:07.125236 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-28 01:00:07.125241 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-28 01:00:07.125245 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-28 01:00:07.125250 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-28 01:00:07.125254 | orchestrator |
2026-03-28 01:00:07.125259 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-28 01:00:07.125264 | orchestrator | Saturday 28 March 2026 00:51:03 +0000 (0:00:07.568) 0:03:23.572 ********
2026-03-28 01:00:07.125268 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.125273 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.125277 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.125283 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 01:00:07.125289 | orchestrator |
2026-03-28 01:00:07.125293 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-28 01:00:07.125302 | orchestrator | Saturday 28 March 2026 00:51:05 +0000 (0:00:01.605) 0:03:25.178 ********
2026-03-28 01:00:07.125307 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 01:00:07.125312 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-28 01:00:07.125317 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-28 01:00:07.125322 | orchestrator |
2026-03-28 01:00:07.125326 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-28 01:00:07.125331 | orchestrator | Saturday 28 March 2026 00:51:06 +0000 (0:00:01.130) 0:03:26.308 ********
2026-03-28 01:00:07.125336 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 01:00:07.125341 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-28 01:00:07.125345 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-28 01:00:07.125350 | orchestrator |
2026-03-28 01:00:07.125355 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-28 01:00:07.125359 | orchestrator | Saturday 28 March 2026 00:51:08 +0000 (0:00:01.693) 0:03:28.002 ********
2026-03-28 01:00:07.125364 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.125369 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.125373 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.125378 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.125382 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.125387 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.125391 | orchestrator |
2026-03-28 01:00:07.125396 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-28 01:00:07.125401 | orchestrator | Saturday 28 March 2026 00:51:08 +0000 (0:00:00.637) 0:03:28.640 ********
2026-03-28 01:00:07.125406 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.125410 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.125415 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.125419 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.125428 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.125432 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.125437 | orchestrator |
2026-03-28 01:00:07.125442 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-28 01:00:07.125446 | orchestrator | Saturday 28 March 2026 00:51:09 +0000 (0:00:01.071) 0:03:29.711 ********
2026-03-28 01:00:07.125451 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.125456 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.125460 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.125465 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.125469 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.125474 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.125478 | orchestrator |
2026-03-28 01:00:07.125487 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-28 01:00:07.125491 | orchestrator | Saturday 28 March 2026 00:51:10 +0000 (0:00:00.941) 0:03:30.653 ********
2026-03-28 01:00:07.125496 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.125501 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.125505 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.125510 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.125515 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.125519 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.125524 | orchestrator |
2026-03-28 01:00:07.125529 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-28 01:00:07.125537 | orchestrator | Saturday 28 March 2026 00:51:12 +0000 (0:00:01.237) 0:03:31.890 ********
2026-03-28 01:00:07.125542 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.125547 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.125551 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.125556 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.125561 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.125565 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.125570 | orchestrator |
2026-03-28 01:00:07.125575 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-28 01:00:07.125580 | orchestrator | Saturday 28 March 2026 00:51:13 +0000 (0:00:01.125) 0:03:33.016 ********
2026-03-28 01:00:07.125584 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.125589 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.125593 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.125598 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.125603 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.125607 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.125612 | orchestrator |
2026-03-28 01:00:07.125617 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-28 01:00:07.125621 | orchestrator | Saturday 28 March 2026 00:51:14 +0000 (0:00:01.295) 0:03:34.312 ********
2026-03-28 01:00:07.125626 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.125631 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.125636 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.125640 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.125645 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.125650 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.125654 | orchestrator |
2026-03-28 01:00:07.125659 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-28 01:00:07.125664 | orchestrator | Saturday 28 March 2026 00:51:15 +0000 (0:00:00.855) 0:03:35.167 ********
2026-03-28 01:00:07.125668 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.125673 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.125678 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.125682 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.125687 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.125691 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.125696 | orchestrator |
2026-03-28 01:00:07.125701 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-28 01:00:07.125705 | orchestrator | Saturday 28 March 2026 00:51:16 +0000 (0:00:01.227) 0:03:36.395 ********
2026-03-28 01:00:07.125710 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.125718 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.125725 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.125732 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.125740 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.125747 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.125754 | orchestrator |
2026-03-28 01:00:07.125760 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-28 01:00:07.125766 | orchestrator | Saturday 28 March 2026 00:51:19 +0000 (0:00:03.040) 0:03:39.436 ********
2026-03-28 01:00:07.125772 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.125779 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.125786 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.125793 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.125800 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.125807 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.125813 | orchestrator |
2026-03-28 01:00:07.125819 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-28 01:00:07.125826 | orchestrator | Saturday 28 March 2026 00:51:20 +0000 (0:00:00.905) 0:03:40.341 ********
2026-03-28 01:00:07.125839 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.125845 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.125852 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.125859 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.125866 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.125873 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.125879 | orchestrator |
2026-03-28 01:00:07.125886 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-28 01:00:07.125893 | orchestrator | Saturday 28 March 2026 00:51:21 +0000 (0:00:00.732) 0:03:41.074 ********
2026-03-28 01:00:07.125901 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.125907 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.125913 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.125919 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.125926 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.125932 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.125940 | orchestrator |
2026-03-28 01:00:07.125947 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-28 01:00:07.125958 | orchestrator | Saturday 28 March 2026 00:51:22 +0000 (0:00:00.907) 0:03:41.981 ********
2026-03-28 01:00:07.125965 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 01:00:07.125972 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-28 01:00:07.125979 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-28 01:00:07.125985 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.125998 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.126006 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.126056 | orchestrator |
2026-03-28 01:00:07.126068 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-28 01:00:07.126075 | orchestrator | Saturday 28 March 2026 00:51:22 +0000 (0:00:00.656) 0:03:42.638 ********
2026-03-28 01:00:07.126086 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-28 01:00:07.126096 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-28 01:00:07.126106 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.126113 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-28 01:00:07.126121 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-28 01:00:07.126129 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.126137 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-28 01:00:07.126152 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-28 01:00:07.126159 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.126166 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.126224 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.126232 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.126240 | orchestrator |
2026-03-28 01:00:07.126247 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-28 01:00:07.126254 | orchestrator | Saturday 28 March 2026 00:51:23 +0000 (0:00:01.002) 0:03:43.640 ********
2026-03-28 01:00:07.126262 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.126269 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.126276 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.126283 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.126291 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.126299 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.126307 | orchestrator |
2026-03-28 01:00:07.126314 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-28 01:00:07.126321 | orchestrator | Saturday 28 March 2026 00:51:24 +0000 (0:00:00.913) 0:03:44.301 ********
2026-03-28 01:00:07.126329 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.126336 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.126343 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.126351 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.126359 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.126366 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.126374 | orchestrator |
2026-03-28 01:00:07.126383 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-28 01:00:07.126390 | orchestrator | Saturday 28 March 2026 00:51:25 +0000 (0:00:00.787) 0:03:45.214 ********
2026-03-28 01:00:07.126397 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.126403 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.126410 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.126417 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.126424 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.126438 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.126446 | orchestrator |
2026-03-28 01:00:07.126453 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-28 01:00:07.126461 | orchestrator | Saturday 28 March 2026 00:51:26 +0000 (0:00:00.910) 0:03:46.001 ********
2026-03-28 01:00:07.126468 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.126475 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.126482 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.126489 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.126496 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.126503 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.126510 | orchestrator |
2026-03-28 01:00:07.126518 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-28 01:00:07.126542 | orchestrator | Saturday 28 March 2026 00:51:27 +0000 (0:00:00.910) 0:03:46.912 ********
2026-03-28 01:00:07.126549 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.126556 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.126563 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.126570 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.126577 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.126584 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.126591 | orchestrator |
2026-03-28 01:00:07.126598 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-28 01:00:07.126612 | orchestrator | Saturday 28 March 2026 00:51:27 +0000 (0:00:00.817) 0:03:47.729 ********
2026-03-28 01:00:07.126619 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.126626 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.126633 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.126640 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.126647 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.126654 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.126661 | orchestrator |
2026-03-28 01:00:07.126668 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-28 01:00:07.126675 | orchestrator | Saturday 28 March 2026 00:51:29 +0000 (0:00:01.582) 0:03:49.311 ********
2026-03-28 01:00:07.126683 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 01:00:07.126690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 01:00:07.126697 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 01:00:07.126705 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.126712 | orchestrator |
2026-03-28 01:00:07.126719 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-28 01:00:07.126727 | orchestrator | Saturday 28 March 2026 00:51:29 +0000 (0:00:00.475) 0:03:49.787 ********
2026-03-28 01:00:07.126734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 01:00:07.126742 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 01:00:07.126749 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 01:00:07.126757 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.126764 | orchestrator |
2026-03-28 01:00:07.126770 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-28 01:00:07.126777 | orchestrator | Saturday 28 March 2026 00:51:30 +0000 (0:00:00.617) 0:03:50.405 ********
2026-03-28 01:00:07.126785 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 01:00:07.126792 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 01:00:07.126799 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 01:00:07.126805 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.126812 | orchestrator |
2026-03-28 01:00:07.126819 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-28 01:00:07.126826 | orchestrator | Saturday 28 March 2026 00:51:31 +0000 (0:00:00.434) 0:03:50.840 ********
2026-03-28 01:00:07.126833 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.126841 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.126848 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.126855 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.126862 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.126870 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.126877 | orchestrator |
2026-03-28 01:00:07.126884 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-28 01:00:07.126892 | orchestrator | Saturday 28 March 2026 00:51:31 +0000 (0:00:00.730) 0:03:51.570 ********
2026-03-28 01:00:07.126900 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-28 01:00:07.126907 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-28 01:00:07.126915 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-28 01:00:07.126922 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-28 01:00:07.126929 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.126936 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-28 01:00:07.126944 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.126951 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-28 01:00:07.126959 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.126966 | orchestrator |
2026-03-28 01:00:07.126974 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-28 01:00:07.126981 | orchestrator | Saturday 28 March 2026 00:51:34 +0000 (0:00:02.617) 0:03:54.188 ********
2026-03-28 01:00:07.127000 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:00:07.127008 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:00:07.127015 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:00:07.127023 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:00:07.127030 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:00:07.127038 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:00:07.127044 | orchestrator |
2026-03-28 01:00:07.127051 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-28 01:00:07.127059 | orchestrator | Saturday 28 March 2026 00:51:37 +0000 (0:00:03.554) 0:03:57.742 ********
2026-03-28 01:00:07.127066 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:00:07.127073 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:00:07.127080 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:00:07.127087 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:00:07.127094 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:00:07.127102 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:00:07.127109 | orchestrator |
2026-03-28 01:00:07.127121 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-28 01:00:07.127129 | orchestrator | Saturday 28 March 2026 00:51:39 +0000 (0:00:01.457) 0:03:59.200 ********
2026-03-28 01:00:07.127136 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.127144 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.127152 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.127159 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:00:07.127167 | orchestrator |
2026-03-28 01:00:07.127193 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-28 01:00:07.127208 | orchestrator | Saturday 28 March 2026 00:51:40 +0000 (0:00:01.287) 0:04:00.488 ********
2026-03-28 01:00:07.127217 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:00:07.127224 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:00:07.127231 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:00:07.127238 | orchestrator |
2026-03-28 01:00:07.127245 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-28 01:00:07.127253 | orchestrator | Saturday 28 March 2026 00:51:41 +0000 (0:00:00.400) 0:04:00.889 ********
2026-03-28 01:00:07.127260 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:00:07.127267 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:00:07.127274 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:00:07.127282 | orchestrator |
2026-03-28 01:00:07.127290 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-28 01:00:07.127297 | orchestrator | Saturday 28 March 2026 00:51:42 +0000 (0:00:01.646) 0:04:02.535 ********
2026-03-28 01:00:07.127305 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 01:00:07.127313 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-28 01:00:07.127320 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-28 01:00:07.127328 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.127336 | orchestrator |
2026-03-28 01:00:07.127343 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-28 01:00:07.127351 | orchestrator | Saturday 28 March 2026 00:51:43 +0000 (0:00:00.778) 0:04:03.314 ********
2026-03-28 01:00:07.127359 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:00:07.127366 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:00:07.127373 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:00:07.127381 | orchestrator |
2026-03-28 01:00:07.127387 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-28 01:00:07.127396 | orchestrator | Saturday 28 March 2026 00:51:44 +0000 (0:00:00.486) 0:04:03.800 ********
2026-03-28 01:00:07.127403 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.127411 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.127417 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.127424 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 01:00:07.127440 | orchestrator |
2026-03-28 01:00:07.127448 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-28 01:00:07.127454 | orchestrator | Saturday 28 March 2026 00:51:45 +0000 (0:00:01.153) 0:04:04.954 ********
2026-03-28 01:00:07.127461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 01:00:07.127467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 01:00:07.127475 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 01:00:07.127482 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.127491 | orchestrator |
2026-03-28 01:00:07.127499 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-28 01:00:07.127506 | orchestrator | Saturday 28 March 2026 00:51:45 +0000 (0:00:00.546) 0:04:05.501 ********
2026-03-28 01:00:07.127514 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.127522 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.127530 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.127537 | orchestrator |
2026-03-28 01:00:07.127545 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-28 01:00:07.127553 | orchestrator | Saturday 28 March 2026 00:51:46 +0000 (0:00:00.383) 0:04:05.884 ********
2026-03-28 01:00:07.127560 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.127568 | orchestrator |
2026-03-28 01:00:07.127576 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-28 01:00:07.127583 | orchestrator | Saturday 28 March 2026 00:51:46 +0000 (0:00:00.245) 0:04:06.129 ********
2026-03-28 01:00:07.127590 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.127597 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.127605 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.127613 | orchestrator |
2026-03-28 01:00:07.127620 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-28 01:00:07.127628 | orchestrator | Saturday 28 March 2026 00:51:46 +0000 (0:00:00.335) 0:04:06.465 ********
2026-03-28 01:00:07.127635 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.127642 | orchestrator |
2026-03-28 01:00:07.127649 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-28 01:00:07.127656 | orchestrator | Saturday 28 March 2026 00:51:46 +0000 (0:00:00.244) 0:04:06.709 ********
2026-03-28 01:00:07.127663 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.127671 | orchestrator |
2026-03-28 01:00:07.127677 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-28 01:00:07.127684 | orchestrator | Saturday 28 March 2026 00:51:47 +0000 (0:00:00.241) 0:04:06.950 ********
2026-03-28 01:00:07.127692 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.127699 | orchestrator |
2026-03-28 01:00:07.127707 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-28 01:00:07.127715 | orchestrator | Saturday 28 March 2026 00:51:47 +0000 (0:00:00.185) 0:04:07.136 ********
2026-03-28 01:00:07.127723 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.127731 | orchestrator |
2026-03-28 01:00:07.127739 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-28 01:00:07.127746 | orchestrator | Saturday 28 March 2026 00:51:48 +0000 (0:00:00.788) 0:04:07.925 ********
2026-03-28 01:00:07.127754 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.127761 | orchestrator |
2026-03-28 01:00:07.127774 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-28 01:00:07.127781 | orchestrator | Saturday 28 March 2026 00:51:48 +0000 (0:00:00.251) 0:04:08.176 ********
2026-03-28 01:00:07.127788 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 01:00:07.127795 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 01:00:07.127802 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 01:00:07.127809 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.127817 | orchestrator |
2026-03-28 01:00:07.127831 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-28 01:00:07.127845 | orchestrator | Saturday 28 March 2026 00:51:48 +0000 (0:00:00.507) 0:04:08.684 ********
2026-03-28 01:00:07.127853 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.127861 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:07.127869 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:07.127876 | orchestrator |
2026-03-28 01:00:07.127883 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-28 01:00:07.127890 | orchestrator | Saturday 28 March 2026 00:51:49 +0000 (0:00:00.405) 0:04:09.089 ********
2026-03-28 01:00:07.127898 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.127905 | orchestrator |
2026-03-28 01:00:07.127912 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-28 01:00:07.127919 | orchestrator | Saturday 28 March 2026 00:51:49 +0000 (0:00:00.208) 0:04:09.298 ********
2026-03-28 01:00:07.127926 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.127933 | orchestrator |
2026-03-28 01:00:07.127940 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-28 01:00:07.127948 | orchestrator | Saturday 28 March 2026 00:51:49 +0000 (0:00:00.237) 0:04:09.535 ********
2026-03-28 01:00:07.127955 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.127962 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.127969 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.127977 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 01:00:07.127984 | orchestrator |
2026-03-28 01:00:07.127991 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-28 01:00:07.127999 | orchestrator | Saturday 28 March 2026 00:51:50 +0000 (0:00:01.212) 0:04:10.748 ********
2026-03-28 01:00:07.128006 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.128013 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.128021 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.128029 | orchestrator |
2026-03-28 01:00:07.128036 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-28 01:00:07.128043 | orchestrator | Saturday 28 March 2026 00:51:51 +0000 (0:00:00.368) 0:04:11.117 ********
2026-03-28 01:00:07.128050 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:00:07.128057 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:00:07.128064 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:00:07.128071 | orchestrator |
2026-03-28 01:00:07.128079 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-28 01:00:07.128086 | orchestrator | Saturday 28 March 2026 00:51:52 +0000 (0:00:01.419) 0:04:12.537 ********
2026-03-28 01:00:07.128094 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 01:00:07.128101 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 01:00:07.128109 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 01:00:07.128116 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:07.128124 | orchestrator |
2026-03-28 01:00:07.128131 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-28 01:00:07.128139 | orchestrator | Saturday 28 March 2026 00:51:53 +0000 (0:00:00.965) 0:04:13.503 ********
2026-03-28 01:00:07.128147 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:07.128154 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:07.128162 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:07.128169 | orchestrator |
2026-03-28 01:00:07.128195 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-28 01:00:07.128203 | orchestrator | Saturday 28 March 2026 00:51:54 +0000 (0:00:00.636) 0:04:14.139 ********
2026-03-28 01:00:07.128210 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:07.128217 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:07.128224 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:07.128232 | orchestrator |
included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:00:07.128247 | orchestrator | 2026-03-28 01:00:07.128255 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-28 01:00:07.128263 | orchestrator | Saturday 28 March 2026 00:51:55 +0000 (0:00:00.989) 0:04:15.128 ******** 2026-03-28 01:00:07.128270 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.128278 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.128285 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.128292 | orchestrator | 2026-03-28 01:00:07.128299 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-28 01:00:07.128306 | orchestrator | Saturday 28 March 2026 00:51:55 +0000 (0:00:00.632) 0:04:15.761 ******** 2026-03-28 01:00:07.128314 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:00:07.128321 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:00:07.128328 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:00:07.128336 | orchestrator | 2026-03-28 01:00:07.128343 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-28 01:00:07.128351 | orchestrator | Saturday 28 March 2026 00:51:57 +0000 (0:00:01.546) 0:04:17.308 ******** 2026-03-28 01:00:07.128359 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 01:00:07.128366 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 01:00:07.128373 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 01:00:07.128380 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.128386 | orchestrator | 2026-03-28 01:00:07.128393 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-28 01:00:07.128405 | orchestrator | Saturday 28 March 2026 00:51:58 +0000 
(0:00:00.729) 0:04:18.037 ******** 2026-03-28 01:00:07.128412 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.128419 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.128426 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.128433 | orchestrator | 2026-03-28 01:00:07.128440 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-28 01:00:07.128448 | orchestrator | Saturday 28 March 2026 00:51:58 +0000 (0:00:00.470) 0:04:18.508 ******** 2026-03-28 01:00:07.128455 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.128463 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.128471 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.128478 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.128485 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.128499 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.128506 | orchestrator | 2026-03-28 01:00:07.128513 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-28 01:00:07.128520 | orchestrator | Saturday 28 March 2026 00:51:59 +0000 (0:00:00.988) 0:04:19.496 ******** 2026-03-28 01:00:07.128527 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.128534 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.128542 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.128549 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:07.128556 | orchestrator | 2026-03-28 01:00:07.128563 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-28 01:00:07.128571 | orchestrator | Saturday 28 March 2026 00:52:00 +0000 (0:00:01.060) 0:04:20.557 ******** 2026-03-28 01:00:07.128578 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.128585 | orchestrator | ok: 
[testbed-node-1] 2026-03-28 01:00:07.128593 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.128600 | orchestrator | 2026-03-28 01:00:07.128607 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-28 01:00:07.128615 | orchestrator | Saturday 28 March 2026 00:52:01 +0000 (0:00:00.601) 0:04:21.159 ******** 2026-03-28 01:00:07.128622 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:07.128629 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:07.128637 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:07.128651 | orchestrator | 2026-03-28 01:00:07.128658 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-28 01:00:07.128666 | orchestrator | Saturday 28 March 2026 00:52:03 +0000 (0:00:01.649) 0:04:22.809 ******** 2026-03-28 01:00:07.128674 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-28 01:00:07.128682 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-28 01:00:07.128689 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-28 01:00:07.128696 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.128703 | orchestrator | 2026-03-28 01:00:07.128710 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-28 01:00:07.128718 | orchestrator | Saturday 28 March 2026 00:52:04 +0000 (0:00:01.223) 0:04:24.032 ******** 2026-03-28 01:00:07.128725 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.128732 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.128740 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.128747 | orchestrator | 2026-03-28 01:00:07.128754 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-28 01:00:07.128762 | orchestrator | 2026-03-28 01:00:07.128769 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-03-28 01:00:07.128777 | orchestrator | Saturday 28 March 2026 00:52:04 +0000 (0:00:00.597) 0:04:24.630 ******** 2026-03-28 01:00:07.128786 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:07.128794 | orchestrator | 2026-03-28 01:00:07.128802 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-28 01:00:07.128809 | orchestrator | Saturday 28 March 2026 00:52:05 +0000 (0:00:00.885) 0:04:25.516 ******** 2026-03-28 01:00:07.128817 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:07.128825 | orchestrator | 2026-03-28 01:00:07.128832 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 01:00:07.128839 | orchestrator | Saturday 28 March 2026 00:52:06 +0000 (0:00:00.569) 0:04:26.085 ******** 2026-03-28 01:00:07.128846 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.128853 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.128861 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.128868 | orchestrator | 2026-03-28 01:00:07.128875 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-28 01:00:07.128882 | orchestrator | Saturday 28 March 2026 00:52:07 +0000 (0:00:01.325) 0:04:27.411 ******** 2026-03-28 01:00:07.128889 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.128896 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.128904 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.128911 | orchestrator | 2026-03-28 01:00:07.128918 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-28 01:00:07.128925 | orchestrator | Saturday 28 March 
2026 00:52:08 +0000 (0:00:00.452) 0:04:27.863 ******** 2026-03-28 01:00:07.128932 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.128940 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.128947 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.128954 | orchestrator | 2026-03-28 01:00:07.128962 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 01:00:07.128969 | orchestrator | Saturday 28 March 2026 00:52:08 +0000 (0:00:00.398) 0:04:28.261 ******** 2026-03-28 01:00:07.128977 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.128985 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.128992 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.128999 | orchestrator | 2026-03-28 01:00:07.129007 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 01:00:07.129015 | orchestrator | Saturday 28 March 2026 00:52:08 +0000 (0:00:00.354) 0:04:28.615 ******** 2026-03-28 01:00:07.129022 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.129040 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.129048 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.129055 | orchestrator | 2026-03-28 01:00:07.129063 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-28 01:00:07.129070 | orchestrator | Saturday 28 March 2026 00:52:10 +0000 (0:00:01.205) 0:04:29.821 ******** 2026-03-28 01:00:07.129078 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.129086 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.129093 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.129101 | orchestrator | 2026-03-28 01:00:07.129108 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 01:00:07.129116 | orchestrator | Saturday 28 March 2026 00:52:10 +0000 
(0:00:00.535) 0:04:30.357 ******** 2026-03-28 01:00:07.129130 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.129138 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.129145 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.129152 | orchestrator | 2026-03-28 01:00:07.129160 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 01:00:07.129168 | orchestrator | Saturday 28 March 2026 00:52:10 +0000 (0:00:00.382) 0:04:30.739 ******** 2026-03-28 01:00:07.129228 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.129236 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.129243 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.129251 | orchestrator | 2026-03-28 01:00:07.129258 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 01:00:07.129265 | orchestrator | Saturday 28 March 2026 00:52:11 +0000 (0:00:00.985) 0:04:31.725 ******** 2026-03-28 01:00:07.129273 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.129281 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.129288 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.129295 | orchestrator | 2026-03-28 01:00:07.129302 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 01:00:07.129310 | orchestrator | Saturday 28 March 2026 00:52:12 +0000 (0:00:01.011) 0:04:32.736 ******** 2026-03-28 01:00:07.129317 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.129325 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.129332 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.129340 | orchestrator | 2026-03-28 01:00:07.129348 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 01:00:07.129355 | orchestrator | Saturday 28 March 2026 00:52:13 +0000 (0:00:00.425) 0:04:33.162 ******** 
2026-03-28 01:00:07.129362 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.129370 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.129377 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.129384 | orchestrator | 2026-03-28 01:00:07.129391 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 01:00:07.129399 | orchestrator | Saturday 28 March 2026 00:52:13 +0000 (0:00:00.502) 0:04:33.664 ******** 2026-03-28 01:00:07.129406 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.129414 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.129421 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.129429 | orchestrator | 2026-03-28 01:00:07.129436 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 01:00:07.129443 | orchestrator | Saturday 28 March 2026 00:52:14 +0000 (0:00:00.376) 0:04:34.041 ******** 2026-03-28 01:00:07.129450 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.129457 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.129465 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.129473 | orchestrator | 2026-03-28 01:00:07.129481 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 01:00:07.129489 | orchestrator | Saturday 28 March 2026 00:52:14 +0000 (0:00:00.393) 0:04:34.434 ******** 2026-03-28 01:00:07.129496 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.129503 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.129510 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.129525 | orchestrator | 2026-03-28 01:00:07.129533 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 01:00:07.129540 | orchestrator | Saturday 28 March 2026 00:52:15 +0000 (0:00:00.589) 0:04:35.024 ******** 2026-03-28 
01:00:07.129547 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.129555 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.129562 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.129570 | orchestrator | 2026-03-28 01:00:07.129577 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 01:00:07.129584 | orchestrator | Saturday 28 March 2026 00:52:15 +0000 (0:00:00.494) 0:04:35.518 ******** 2026-03-28 01:00:07.129592 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.129599 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.129607 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.129614 | orchestrator | 2026-03-28 01:00:07.129621 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 01:00:07.129629 | orchestrator | Saturday 28 March 2026 00:52:16 +0000 (0:00:00.818) 0:04:36.337 ******** 2026-03-28 01:00:07.129635 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.129642 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.129650 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.129657 | orchestrator | 2026-03-28 01:00:07.129664 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 01:00:07.129671 | orchestrator | Saturday 28 March 2026 00:52:17 +0000 (0:00:00.655) 0:04:36.992 ******** 2026-03-28 01:00:07.129679 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.129688 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.129695 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.129703 | orchestrator | 2026-03-28 01:00:07.129710 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 01:00:07.129717 | orchestrator | Saturday 28 March 2026 00:52:18 +0000 (0:00:00.912) 0:04:37.904 ******** 2026-03-28 01:00:07.129724 | orchestrator | ok: 
[testbed-node-0] 2026-03-28 01:00:07.129731 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.129738 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.129746 | orchestrator | 2026-03-28 01:00:07.129753 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-28 01:00:07.129761 | orchestrator | Saturday 28 March 2026 00:52:18 +0000 (0:00:00.790) 0:04:38.695 ******** 2026-03-28 01:00:07.129768 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.129776 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.129783 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.129791 | orchestrator | 2026-03-28 01:00:07.129803 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-28 01:00:07.129810 | orchestrator | Saturday 28 March 2026 00:52:19 +0000 (0:00:00.605) 0:04:39.300 ******** 2026-03-28 01:00:07.129817 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:07.129824 | orchestrator | 2026-03-28 01:00:07.129831 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-28 01:00:07.129838 | orchestrator | Saturday 28 March 2026 00:52:20 +0000 (0:00:01.070) 0:04:40.371 ******** 2026-03-28 01:00:07.129844 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.129851 | orchestrator | 2026-03-28 01:00:07.129865 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-28 01:00:07.129871 | orchestrator | Saturday 28 March 2026 00:52:20 +0000 (0:00:00.174) 0:04:40.546 ******** 2026-03-28 01:00:07.129878 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-28 01:00:07.129885 | orchestrator | 2026-03-28 01:00:07.129892 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-28 01:00:07.129898 | 
orchestrator | Saturday 28 March 2026 00:52:21 +0000 (0:00:01.071) 0:04:41.618 ******** 2026-03-28 01:00:07.129905 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.129912 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.129924 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.129931 | orchestrator | 2026-03-28 01:00:07.129938 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-28 01:00:07.129945 | orchestrator | Saturday 28 March 2026 00:52:22 +0000 (0:00:00.977) 0:04:42.595 ******** 2026-03-28 01:00:07.129951 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.129958 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.129965 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.129971 | orchestrator | 2026-03-28 01:00:07.129978 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-28 01:00:07.129984 | orchestrator | Saturday 28 March 2026 00:52:23 +0000 (0:00:00.996) 0:04:43.592 ******** 2026-03-28 01:00:07.129992 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:07.129998 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:07.130005 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:07.130011 | orchestrator | 2026-03-28 01:00:07.130126 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-28 01:00:07.130134 | orchestrator | Saturday 28 March 2026 00:52:26 +0000 (0:00:02.453) 0:04:46.045 ******** 2026-03-28 01:00:07.130141 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:07.130148 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:07.130154 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:07.130160 | orchestrator | 2026-03-28 01:00:07.130166 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-28 01:00:07.130192 | orchestrator | Saturday 28 March 2026 
00:52:28 +0000 (0:00:02.149) 0:04:48.195 ******** 2026-03-28 01:00:07.130199 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:07.130205 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:07.130211 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:07.130217 | orchestrator | 2026-03-28 01:00:07.130224 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-28 01:00:07.130231 | orchestrator | Saturday 28 March 2026 00:52:29 +0000 (0:00:00.954) 0:04:49.149 ******** 2026-03-28 01:00:07.130237 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.130243 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.130250 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.130257 | orchestrator | 2026-03-28 01:00:07.130263 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-28 01:00:07.130269 | orchestrator | Saturday 28 March 2026 00:52:31 +0000 (0:00:01.729) 0:04:50.879 ******** 2026-03-28 01:00:07.130276 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:07.130283 | orchestrator | 2026-03-28 01:00:07.130290 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-28 01:00:07.130296 | orchestrator | Saturday 28 March 2026 00:52:32 +0000 (0:00:01.724) 0:04:52.603 ******** 2026-03-28 01:00:07.130303 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.130309 | orchestrator | 2026-03-28 01:00:07.130315 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-28 01:00:07.130322 | orchestrator | Saturday 28 March 2026 00:52:33 +0000 (0:00:00.683) 0:04:53.287 ******** 2026-03-28 01:00:07.130328 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 01:00:07.130334 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:07.130341 | orchestrator | ok: [testbed-node-2 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:07.130348 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 01:00:07.130354 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-28 01:00:07.130361 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 01:00:07.130367 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 01:00:07.130374 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-03-28 01:00:07.130380 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 01:00:07.130387 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-28 01:00:07.130401 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-28 01:00:07.130408 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-03-28 01:00:07.130415 | orchestrator | 2026-03-28 01:00:07.130422 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-28 01:00:07.130428 | orchestrator | Saturday 28 March 2026 00:52:36 +0000 (0:00:03.355) 0:04:56.643 ******** 2026-03-28 01:00:07.130435 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:07.130442 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:07.130448 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:07.130455 | orchestrator | 2026-03-28 01:00:07.130462 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-28 01:00:07.130469 | orchestrator | Saturday 28 March 2026 00:52:38 +0000 (0:00:01.440) 0:04:58.084 ******** 2026-03-28 01:00:07.130476 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.130488 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.130496 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.130502 | orchestrator | 2026-03-28 01:00:07.130508 | orchestrator | TASK [ceph-mon : 
Set_fact monmaptool container command] ************************ 2026-03-28 01:00:07.130515 | orchestrator | Saturday 28 March 2026 00:52:38 +0000 (0:00:00.417) 0:04:58.502 ******** 2026-03-28 01:00:07.130521 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.130528 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.130534 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.130540 | orchestrator | 2026-03-28 01:00:07.130547 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-28 01:00:07.130553 | orchestrator | Saturday 28 March 2026 00:52:39 +0000 (0:00:01.000) 0:04:59.503 ******** 2026-03-28 01:00:07.130559 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:07.130597 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:07.130604 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:07.130611 | orchestrator | 2026-03-28 01:00:07.130618 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-28 01:00:07.130625 | orchestrator | Saturday 28 March 2026 00:52:42 +0000 (0:00:02.364) 0:05:01.868 ******** 2026-03-28 01:00:07.130632 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:07.130639 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:07.130646 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:07.130653 | orchestrator | 2026-03-28 01:00:07.130660 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-28 01:00:07.130668 | orchestrator | Saturday 28 March 2026 00:52:43 +0000 (0:00:01.609) 0:05:03.477 ******** 2026-03-28 01:00:07.130675 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.130682 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.130689 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.130695 | orchestrator | 2026-03-28 01:00:07.130702 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] 
************************************ 2026-03-28 01:00:07.130709 | orchestrator | Saturday 28 March 2026 00:52:44 +0000 (0:00:00.519) 0:05:03.997 ******** 2026-03-28 01:00:07.130715 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1, testbed-node-0, testbed-node-2 2026-03-28 01:00:07.130723 | orchestrator | 2026-03-28 01:00:07.130729 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-28 01:00:07.130736 | orchestrator | Saturday 28 March 2026 00:52:45 +0000 (0:00:01.118) 0:05:05.116 ******** 2026-03-28 01:00:07.130743 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.130750 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.130757 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.130764 | orchestrator | 2026-03-28 01:00:07.130771 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-28 01:00:07.130778 | orchestrator | Saturday 28 March 2026 00:52:45 +0000 (0:00:00.506) 0:05:05.623 ******** 2026-03-28 01:00:07.130785 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.130793 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.130800 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.130817 | orchestrator | 2026-03-28 01:00:07.130825 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-28 01:00:07.130832 | orchestrator | Saturday 28 March 2026 00:52:46 +0000 (0:00:00.420) 0:05:06.045 ******** 2026-03-28 01:00:07.130838 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:07.130845 | orchestrator | 2026-03-28 01:00:07.130851 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-28 01:00:07.130858 | orchestrator | Saturday 28 March 2026 00:52:47 +0000 
(0:00:01.044) 0:05:07.090 ******** 2026-03-28 01:00:07.130864 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:07.130872 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:07.130878 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:07.130883 | orchestrator | 2026-03-28 01:00:07.130890 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-28 01:00:07.130897 | orchestrator | Saturday 28 March 2026 00:52:49 +0000 (0:00:02.020) 0:05:09.110 ******** 2026-03-28 01:00:07.130903 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:07.130909 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:07.130915 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:07.130921 | orchestrator | 2026-03-28 01:00:07.130928 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-28 01:00:07.130935 | orchestrator | Saturday 28 March 2026 00:52:50 +0000 (0:00:01.280) 0:05:10.391 ******** 2026-03-28 01:00:07.130941 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:07.130947 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:07.130953 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:07.130959 | orchestrator | 2026-03-28 01:00:07.130965 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-28 01:00:07.130971 | orchestrator | Saturday 28 March 2026 00:52:52 +0000 (0:00:01.897) 0:05:12.289 ******** 2026-03-28 01:00:07.130977 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:07.130984 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:07.130990 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:07.130996 | orchestrator | 2026-03-28 01:00:07.131003 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-28 01:00:07.131009 | orchestrator | Saturday 28 March 2026 00:52:54 +0000 (0:00:02.315) 
0:05:14.605 ******** 2026-03-28 01:00:07.131016 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:07.131023 | orchestrator | 2026-03-28 01:00:07.131029 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-28 01:00:07.131035 | orchestrator | Saturday 28 March 2026 00:52:55 +0000 (0:00:00.669) 0:05:15.274 ******** 2026-03-28 01:00:07.131042 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2026-03-28 01:00:07.131048 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.131055 | orchestrator | 2026-03-28 01:00:07.131062 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-28 01:00:07.131075 | orchestrator | Saturday 28 March 2026 00:53:17 +0000 (0:00:21.938) 0:05:37.212 ******** 2026-03-28 01:00:07.131082 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.131089 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.131095 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.131101 | orchestrator | 2026-03-28 01:00:07.131108 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-28 01:00:07.131115 | orchestrator | Saturday 28 March 2026 00:53:28 +0000 (0:00:10.997) 0:05:48.210 ******** 2026-03-28 01:00:07.131121 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.131128 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.131135 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.131142 | orchestrator | 2026-03-28 01:00:07.131149 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-28 01:00:07.131259 | orchestrator | Saturday 28 March 2026 00:53:29 +0000 (0:00:00.619) 0:05:48.829 ******** 2026-03-28 01:00:07.131273 | orchestrator | changed: 
[testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d70c5b0934377a0c95ca5625d9bd0ea2f0709c97'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-28 01:00:07.131282 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d70c5b0934377a0c95ca5625d9bd0ea2f0709c97'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-28 01:00:07.131290 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d70c5b0934377a0c95ca5625d9bd0ea2f0709c97'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-28 01:00:07.131299 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d70c5b0934377a0c95ca5625d9bd0ea2f0709c97'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-28 01:00:07.131306 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d70c5b0934377a0c95ca5625d9bd0ea2f0709c97'}}, {'key': 'ms_bind_ipv4', 
'value': 'True'}]) 2026-03-28 01:00:07.131314 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d70c5b0934377a0c95ca5625d9bd0ea2f0709c97'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__d70c5b0934377a0c95ca5625d9bd0ea2f0709c97'}])  2026-03-28 01:00:07.131323 | orchestrator | 2026-03-28 01:00:07.131329 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-28 01:00:07.131335 | orchestrator | Saturday 28 March 2026 00:53:44 +0000 (0:00:15.508) 0:06:04.338 ******** 2026-03-28 01:00:07.131342 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.131348 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.131354 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.131362 | orchestrator | 2026-03-28 01:00:07.131368 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-28 01:00:07.131375 | orchestrator | Saturday 28 March 2026 00:53:44 +0000 (0:00:00.381) 0:06:04.719 ******** 2026-03-28 01:00:07.131382 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:07.131389 | orchestrator | 2026-03-28 01:00:07.131396 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-28 01:00:07.131403 | orchestrator | Saturday 28 March 2026 00:53:45 +0000 (0:00:00.870) 0:06:05.589 ******** 2026-03-28 01:00:07.131410 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.131417 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.131424 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.131431 | orchestrator | 2026-03-28 01:00:07.131437 | 
orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-28 01:00:07.131444 | orchestrator | Saturday 28 March 2026 00:53:46 +0000 (0:00:00.341) 0:06:05.931 ******** 2026-03-28 01:00:07.131458 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.131464 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.131471 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.131477 | orchestrator | 2026-03-28 01:00:07.131483 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-28 01:00:07.131496 | orchestrator | Saturday 28 March 2026 00:53:46 +0000 (0:00:00.385) 0:06:06.316 ******** 2026-03-28 01:00:07.131503 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-28 01:00:07.131509 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-28 01:00:07.131516 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-28 01:00:07.131523 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.131530 | orchestrator | 2026-03-28 01:00:07.131537 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-28 01:00:07.131544 | orchestrator | Saturday 28 March 2026 00:53:47 +0000 (0:00:00.981) 0:06:07.298 ******** 2026-03-28 01:00:07.131551 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.131558 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.131592 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.131600 | orchestrator | 2026-03-28 01:00:07.131607 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-03-28 01:00:07.131613 | orchestrator | 2026-03-28 01:00:07.131620 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 01:00:07.131627 | orchestrator | Saturday 28 March 2026 00:53:48 +0000 (0:00:00.914) 
0:06:08.212 ******** 2026-03-28 01:00:07.131634 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:07.131641 | orchestrator | 2026-03-28 01:00:07.131648 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-28 01:00:07.131654 | orchestrator | Saturday 28 March 2026 00:53:48 +0000 (0:00:00.519) 0:06:08.732 ******** 2026-03-28 01:00:07.131660 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:07.131666 | orchestrator | 2026-03-28 01:00:07.131673 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 01:00:07.131679 | orchestrator | Saturday 28 March 2026 00:53:49 +0000 (0:00:00.816) 0:06:09.549 ******** 2026-03-28 01:00:07.131686 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.131692 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.131698 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.131705 | orchestrator | 2026-03-28 01:00:07.131711 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-28 01:00:07.131717 | orchestrator | Saturday 28 March 2026 00:53:50 +0000 (0:00:00.803) 0:06:10.352 ******** 2026-03-28 01:00:07.131724 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.131730 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.131736 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.131742 | orchestrator | 2026-03-28 01:00:07.131749 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-28 01:00:07.131756 | orchestrator | Saturday 28 March 2026 00:53:50 +0000 (0:00:00.337) 0:06:10.690 ******** 2026-03-28 01:00:07.131762 | orchestrator | skipping: [testbed-node-0] 2026-03-28 
01:00:07.131768 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.131774 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.131779 | orchestrator | 2026-03-28 01:00:07.131785 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 01:00:07.131791 | orchestrator | Saturday 28 March 2026 00:53:51 +0000 (0:00:00.602) 0:06:11.293 ******** 2026-03-28 01:00:07.131797 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.131802 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.131807 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.131813 | orchestrator | 2026-03-28 01:00:07.131826 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 01:00:07.131831 | orchestrator | Saturday 28 March 2026 00:53:51 +0000 (0:00:00.341) 0:06:11.634 ******** 2026-03-28 01:00:07.131838 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.131843 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.131849 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.131855 | orchestrator | 2026-03-28 01:00:07.131860 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-28 01:00:07.131867 | orchestrator | Saturday 28 March 2026 00:53:52 +0000 (0:00:00.742) 0:06:12.377 ******** 2026-03-28 01:00:07.131872 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.131877 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.131882 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.131888 | orchestrator | 2026-03-28 01:00:07.131893 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 01:00:07.131899 | orchestrator | Saturday 28 March 2026 00:53:52 +0000 (0:00:00.338) 0:06:12.716 ******** 2026-03-28 01:00:07.131905 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.131910 | 
orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.131916 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.131922 | orchestrator | 2026-03-28 01:00:07.131927 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 01:00:07.131933 | orchestrator | Saturday 28 March 2026 00:53:53 +0000 (0:00:00.597) 0:06:13.313 ******** 2026-03-28 01:00:07.131939 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.131944 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.131949 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.131955 | orchestrator | 2026-03-28 01:00:07.131961 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 01:00:07.131967 | orchestrator | Saturday 28 March 2026 00:53:54 +0000 (0:00:00.782) 0:06:14.095 ******** 2026-03-28 01:00:07.131972 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.131977 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.131983 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.131988 | orchestrator | 2026-03-28 01:00:07.131994 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 01:00:07.131999 | orchestrator | Saturday 28 March 2026 00:53:55 +0000 (0:00:00.757) 0:06:14.853 ******** 2026-03-28 01:00:07.132005 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.132011 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.132017 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.132023 | orchestrator | 2026-03-28 01:00:07.132028 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 01:00:07.132034 | orchestrator | Saturday 28 March 2026 00:53:55 +0000 (0:00:00.309) 0:06:15.162 ******** 2026-03-28 01:00:07.132048 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.132055 | orchestrator | ok: [testbed-node-1] 
2026-03-28 01:00:07.132061 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.132066 | orchestrator | 2026-03-28 01:00:07.132072 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 01:00:07.132077 | orchestrator | Saturday 28 March 2026 00:53:56 +0000 (0:00:00.642) 0:06:15.805 ******** 2026-03-28 01:00:07.132082 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.132088 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.132093 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.132098 | orchestrator | 2026-03-28 01:00:07.132104 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 01:00:07.132146 | orchestrator | Saturday 28 March 2026 00:53:56 +0000 (0:00:00.345) 0:06:16.150 ******** 2026-03-28 01:00:07.132155 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.132161 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.132166 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.132191 | orchestrator | 2026-03-28 01:00:07.132198 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 01:00:07.132213 | orchestrator | Saturday 28 March 2026 00:53:56 +0000 (0:00:00.324) 0:06:16.474 ******** 2026-03-28 01:00:07.132220 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.132226 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.132232 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.132237 | orchestrator | 2026-03-28 01:00:07.132243 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 01:00:07.132248 | orchestrator | Saturday 28 March 2026 00:53:57 +0000 (0:00:00.335) 0:06:16.809 ******** 2026-03-28 01:00:07.132253 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.132259 | orchestrator | skipping: [testbed-node-1] 
2026-03-28 01:00:07.132266 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.132271 | orchestrator | 2026-03-28 01:00:07.132277 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 01:00:07.132283 | orchestrator | Saturday 28 March 2026 00:53:57 +0000 (0:00:00.361) 0:06:17.170 ******** 2026-03-28 01:00:07.132289 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.132294 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.132300 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.132305 | orchestrator | 2026-03-28 01:00:07.132311 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 01:00:07.132317 | orchestrator | Saturday 28 March 2026 00:53:58 +0000 (0:00:00.625) 0:06:17.796 ******** 2026-03-28 01:00:07.132323 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.132329 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.132335 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.132340 | orchestrator | 2026-03-28 01:00:07.132346 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 01:00:07.132352 | orchestrator | Saturday 28 March 2026 00:53:58 +0000 (0:00:00.406) 0:06:18.202 ******** 2026-03-28 01:00:07.132357 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.132363 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.132368 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.132374 | orchestrator | 2026-03-28 01:00:07.132379 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 01:00:07.132385 | orchestrator | Saturday 28 March 2026 00:53:58 +0000 (0:00:00.429) 0:06:18.632 ******** 2026-03-28 01:00:07.132391 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.132397 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.132403 | orchestrator | ok: 
[testbed-node-2] 2026-03-28 01:00:07.132409 | orchestrator | 2026-03-28 01:00:07.132415 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-28 01:00:07.132421 | orchestrator | Saturday 28 March 2026 00:53:59 +0000 (0:00:00.955) 0:06:19.587 ******** 2026-03-28 01:00:07.132428 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 01:00:07.132434 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 01:00:07.132440 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 01:00:07.132445 | orchestrator | 2026-03-28 01:00:07.132451 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-28 01:00:07.132457 | orchestrator | Saturday 28 March 2026 00:54:00 +0000 (0:00:00.719) 0:06:20.307 ******** 2026-03-28 01:00:07.132462 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:07.132468 | orchestrator | 2026-03-28 01:00:07.132475 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-28 01:00:07.132481 | orchestrator | Saturday 28 March 2026 00:54:01 +0000 (0:00:00.602) 0:06:20.910 ******** 2026-03-28 01:00:07.132486 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:07.132492 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:07.132498 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:07.132504 | orchestrator | 2026-03-28 01:00:07.132510 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-28 01:00:07.132516 | orchestrator | Saturday 28 March 2026 00:54:02 +0000 (0:00:01.086) 0:06:21.996 ******** 2026-03-28 01:00:07.132527 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.132533 | orchestrator | skipping: [testbed-node-1] 
2026-03-28 01:00:07.132539 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.132545 | orchestrator | 2026-03-28 01:00:07.132551 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-28 01:00:07.132557 | orchestrator | Saturday 28 March 2026 00:54:02 +0000 (0:00:00.618) 0:06:22.614 ******** 2026-03-28 01:00:07.132564 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 01:00:07.132570 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 01:00:07.132575 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 01:00:07.132581 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-28 01:00:07.132587 | orchestrator | 2026-03-28 01:00:07.132593 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-28 01:00:07.132598 | orchestrator | Saturday 28 March 2026 00:54:13 +0000 (0:00:10.816) 0:06:33.430 ******** 2026-03-28 01:00:07.132604 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.132610 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.132621 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.132626 | orchestrator | 2026-03-28 01:00:07.132632 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-28 01:00:07.132639 | orchestrator | Saturday 28 March 2026 00:54:14 +0000 (0:00:00.509) 0:06:33.939 ******** 2026-03-28 01:00:07.132645 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-28 01:00:07.132650 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-28 01:00:07.132656 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-28 01:00:07.132662 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-28 01:00:07.132668 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:07.132703 | orchestrator | ok: 
[testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:07.132709 | orchestrator | 2026-03-28 01:00:07.132715 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-28 01:00:07.132721 | orchestrator | Saturday 28 March 2026 00:54:16 +0000 (0:00:02.628) 0:06:36.568 ******** 2026-03-28 01:00:07.132727 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-28 01:00:07.132733 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-28 01:00:07.132739 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-28 01:00:07.132745 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-28 01:00:07.132751 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 01:00:07.132757 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-28 01:00:07.132763 | orchestrator | 2026-03-28 01:00:07.132769 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-28 01:00:07.132774 | orchestrator | Saturday 28 March 2026 00:54:18 +0000 (0:00:01.546) 0:06:38.114 ******** 2026-03-28 01:00:07.132780 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.132786 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.132791 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.132798 | orchestrator | 2026-03-28 01:00:07.132803 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-28 01:00:07.132809 | orchestrator | Saturday 28 March 2026 00:54:19 +0000 (0:00:01.134) 0:06:39.249 ******** 2026-03-28 01:00:07.132815 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.132821 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.132826 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.132832 | orchestrator | 2026-03-28 01:00:07.132838 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] 
************************************ 2026-03-28 01:00:07.132845 | orchestrator | Saturday 28 March 2026 00:54:19 +0000 (0:00:00.315) 0:06:39.565 ******** 2026-03-28 01:00:07.132851 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.132857 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.132869 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.132935 | orchestrator | 2026-03-28 01:00:07.132941 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-28 01:00:07.132947 | orchestrator | Saturday 28 March 2026 00:54:20 +0000 (0:00:00.319) 0:06:39.885 ******** 2026-03-28 01:00:07.132953 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:07.132959 | orchestrator | 2026-03-28 01:00:07.132966 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-28 01:00:07.132972 | orchestrator | Saturday 28 March 2026 00:54:20 +0000 (0:00:00.856) 0:06:40.741 ******** 2026-03-28 01:00:07.132978 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.132984 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.132990 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.132996 | orchestrator | 2026-03-28 01:00:07.133002 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-28 01:00:07.133008 | orchestrator | Saturday 28 March 2026 00:54:21 +0000 (0:00:00.297) 0:06:41.039 ******** 2026-03-28 01:00:07.133013 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.133020 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.133026 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.133031 | orchestrator | 2026-03-28 01:00:07.133037 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-28 01:00:07.133044 | 
orchestrator | Saturday 28 March 2026 00:54:21 +0000 (0:00:00.276) 0:06:41.316 ******** 2026-03-28 01:00:07.133050 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:07.133057 | orchestrator | 2026-03-28 01:00:07.133063 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-28 01:00:07.133068 | orchestrator | Saturday 28 March 2026 00:54:22 +0000 (0:00:00.620) 0:06:41.936 ******** 2026-03-28 01:00:07.133074 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:07.133080 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:07.133086 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:07.133092 | orchestrator | 2026-03-28 01:00:07.133098 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-28 01:00:07.133104 | orchestrator | Saturday 28 March 2026 00:54:23 +0000 (0:00:01.270) 0:06:43.207 ******** 2026-03-28 01:00:07.133109 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:07.133115 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:07.133121 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:07.133128 | orchestrator | 2026-03-28 01:00:07.133133 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-28 01:00:07.133139 | orchestrator | Saturday 28 March 2026 00:54:24 +0000 (0:00:01.222) 0:06:44.430 ******** 2026-03-28 01:00:07.133145 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:07.133151 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:07.133157 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:07.133162 | orchestrator | 2026-03-28 01:00:07.133168 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-28 01:00:07.133224 | orchestrator | Saturday 28 March 2026 00:54:26 +0000 (0:00:01.811) 0:06:46.241 
******** 2026-03-28 01:00:07.133231 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:07.133237 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:07.133243 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:07.133249 | orchestrator | 2026-03-28 01:00:07.133261 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-28 01:00:07.133266 | orchestrator | Saturday 28 March 2026 00:54:28 +0000 (0:00:02.302) 0:06:48.543 ******** 2026-03-28 01:00:07.133272 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.133277 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.133283 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-28 01:00:07.133289 | orchestrator | 2026-03-28 01:00:07.133302 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-28 01:00:07.133308 | orchestrator | Saturday 28 March 2026 00:54:29 +0000 (0:00:00.553) 0:06:49.096 ******** 2026-03-28 01:00:07.133358 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-28 01:00:07.133366 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-03-28 01:00:07.133372 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-03-28 01:00:07.133378 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-03-28 01:00:07.133383 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-03-28 01:00:07.133389 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-28 01:00:07.133396 | orchestrator | 2026-03-28 01:00:07.133402 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-28 01:00:07.133408 | orchestrator | Saturday 28 March 2026 00:54:59 +0000 (0:00:30.232) 0:07:19.328 ******** 2026-03-28 01:00:07.133414 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-28 01:00:07.133419 | orchestrator | 2026-03-28 01:00:07.133425 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-28 01:00:07.133431 | orchestrator | Saturday 28 March 2026 00:55:00 +0000 (0:00:01.324) 0:07:20.653 ******** 2026-03-28 01:00:07.133436 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.133442 | orchestrator | 2026-03-28 01:00:07.133448 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-28 01:00:07.133453 | orchestrator | Saturday 28 March 2026 00:55:01 +0000 (0:00:00.367) 0:07:21.021 ******** 2026-03-28 01:00:07.133459 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.133464 | orchestrator | 2026-03-28 01:00:07.133469 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-28 01:00:07.133475 | orchestrator | Saturday 28 March 2026 00:55:01 +0000 (0:00:00.159) 0:07:21.181 ******** 2026-03-28 01:00:07.133481 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-03-28 01:00:07.133486 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-03-28 01:00:07.133492 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-03-28 01:00:07.133497 | orchestrator | 2026-03-28 01:00:07.133503 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
**************************************
2026-03-28 01:00:07.133508 | orchestrator | Saturday 28 March 2026 00:55:08 +0000 (0:00:06.613) 0:07:27.795 ********
skipping: [testbed-node-2] => (item=balancer)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
skipping: [testbed-node-2] => (item=status)

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Saturday 28 March 2026 00:55:13 +0000 (0:00:05.194) 0:07:32.990 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Saturday 28 March 2026 00:55:13 +0000 (0:00:00.711) 0:07:33.701 ********
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Saturday 28 March 2026 00:55:14 +0000 (0:00:00.969) 0:07:34.670 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Saturday 28 March 2026 00:55:15 +0000 (0:00:00.549) 0:07:35.219 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Saturday 28 March 2026 00:55:16 +0000 (0:00:01.168) 0:07:36.388 ********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Saturday 28 March 2026 00:55:17 +0000 (0:00:00.650) 0:07:37.039 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-osd] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Saturday 28 March 2026 00:55:18 +0000 (0:00:00.932) 0:07:37.972 ********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Saturday 28 March 2026 00:55:18 +0000 (0:00:00.592) 0:07:38.565 ********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Saturday 28 March 2026 00:55:19 +0000 (0:00:00.843) 0:07:39.408 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Saturday 28 March 2026 00:55:19 +0000 (0:00:00.339) 0:07:39.748 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Saturday 28 March 2026 00:55:20 +0000 (0:00:00.710) 0:07:40.458 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Saturday 28 March 2026 00:55:21 +0000 (0:00:00.714) 0:07:41.173 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Saturday 28 March 2026 00:55:22 +0000 (0:00:01.126) 0:07:42.300 ********
skipping: [testbed-node-3]
skipping: [testbed-node-5]
skipping: [testbed-node-4]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Saturday 28 March 2026 00:55:22 +0000 (0:00:00.426) 0:07:42.726 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Saturday 28 March 2026 00:55:23 +0000 (0:00:00.343) 0:07:43.070 ********
skipping: [testbed-node-4]
skipping: [testbed-node-3]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Saturday 28 March 2026 00:55:23 +0000 (0:00:00.362) 0:07:43.432 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Saturday 28 March 2026 00:55:24 +0000 (0:00:01.054) 0:07:44.487 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Saturday 28 March 2026 00:55:25 +0000 (0:00:00.762) 0:07:45.250 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Saturday 28 March 2026 00:55:25 +0000 (0:00:00.396) 0:07:45.647 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Saturday 28 March 2026 00:55:26 +0000 (0:00:00.459) 0:07:46.107 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Saturday 28 March 2026 00:55:26 +0000 (0:00:00.670) 0:07:46.777 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Saturday 28 March 2026 00:55:27 +0000 (0:00:00.371) 0:07:47.148 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Saturday 28 March 2026 00:55:27 +0000 (0:00:00.349) 0:07:47.498 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Saturday 28 March 2026 00:55:28 +0000 (0:00:00.309) 0:07:47.807 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Saturday 28 March 2026 00:55:28 +0000 (0:00:00.614) 0:07:48.422 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Saturday 28 March 2026 00:55:28 +0000 (0:00:00.333) 0:07:48.756 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Saturday 28 March 2026 00:55:29 +0000 (0:00:00.341) 0:07:49.097 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact add_osd] *********************************************
Saturday 28 March 2026 00:55:30 +0000 (0:00:00.817) 0:07:49.914 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
Saturday 28 March 2026 00:55:30 +0000 (0:00:00.355) 0:07:50.270 ********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
Saturday 28 March 2026 00:55:31 +0000 (0:00:00.639) 0:07:50.909 ********
included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create tmpfiles.d directory] **********************************
Saturday 28 March 2026 00:55:31 +0000 (0:00:00.522) 0:07:51.432 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Disable transparent hugepage] *********************************
Saturday 28 March 2026 00:55:32 +0000 (0:00:00.557) 0:07:51.989 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
Saturday 28 March 2026 00:55:32 +0000 (0:00:00.340) 0:07:52.330 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
Saturday 28 March 2026 00:55:33 +0000 (0:00:00.642) 0:07:52.973 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Apply operating system tuning] ********************************
Saturday 28 March 2026 00:55:33 +0000 (0:00:00.365) 0:07:53.339 ********
changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})

TASK [ceph-osd : Install dependencies] *****************************************
Saturday 28 March 2026 00:55:36 +0000 (0:00:03.265) 0:07:56.604 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks common.yml] *************************************
Saturday 28 March 2026 00:55:37 +0000 (0:00:00.354) 0:07:56.959 ********
included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
Saturday 28 March 2026 00:55:37 +0000 (0:00:00.557) 0:07:57.516 ********
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)

TASK [ceph-osd : Get keys from monitors] ***************************************
Saturday 28 March 2026 00:55:39 +0000 (0:00:01.321) 0:07:58.838 ********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
Saturday 28 March 2026 00:55:41 +0000 (0:00:02.126) 0:08:00.965 ********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-osd : Set noup flag] ************************************************
Saturday 28 March 2026 00:55:42 +0000 (0:00:01.104) 0:08:02.069 ********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
Saturday 28 March 2026 00:55:44 +0000 (0:00:02.162) 0:08:04.231 ********
included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Use ceph-volume to create osds] *******************************
Saturday 28 March 2026 00:55:45 +0000 (0:00:00.803) 0:08:05.035 ********
changed: [testbed-node-4] => (item={'data': 'osd-block-de32c164-f4a0-5092-ad33-650515756f9d', 'data_vg': 'ceph-de32c164-f4a0-5092-ad33-650515756f9d'})
changed: [testbed-node-5] => (item={'data': 'osd-block-8b5a6aab-ec84-598a-adc7-d040a5844549', 'data_vg': 'ceph-8b5a6aab-ec84-598a-adc7-d040a5844549'})
changed: [testbed-node-3] => (item={'data': 'osd-block-e282229f-a8c2-5daa-9c69-6eb93429113b', 'data_vg': 'ceph-e282229f-a8c2-5daa-9c69-6eb93429113b'})
changed: [testbed-node-5] => (item={'data': 'osd-block-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe', 'data_vg': 'ceph-02fe8db3-ee90-5f59-9f4e-fa58d6febfbe'})
changed: [testbed-node-4] => (item={'data': 'osd-block-65811f0f-7bf7-557a-9618-106707fc2899', 'data_vg': 'ceph-65811f0f-7bf7-557a-9618-106707fc2899'})
changed: [testbed-node-3] => (item={'data': 'osd-block-1d415d19-3246-5675-b441-c36cba308c79', 'data_vg': 'ceph-1d415d19-3246-5675-b441-c36cba308c79'})

TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
Saturday 28 March 2026 00:56:33 +0000 (0:00:48.503) 0:08:53.538 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
Saturday 28 March 2026 00:56:34 +0000 (0:00:00.370) 0:08:53.908 ********
included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Get osd ids] **************************************************
Saturday 28 March 2026 00:56:34 +0000 (0:00:00.826) 0:08:54.735 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Collect osd ids] **********************************************
Saturday 28 March 2026 00:56:35 +0000 (0:00:00.669) 0:08:55.405 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Include_tasks systemd.yml] ************************************
Saturday 28 March 2026 00:56:38 +0000 (0:00:02.764) 0:08:58.170 ********
included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Generate systemd unit file] ***********************************
Saturday 28 March 2026 00:56:39 +0000 (0:00:00.966) 0:08:59.136 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
Saturday 28 March 2026 00:56:40 +0000 (0:00:01.215) 0:09:00.352 ********
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [ceph-osd : Enable ceph-osd.target] ***************************************
Saturday 28 March 2026 00:56:41 +0000 (0:00:01.199) 0:09:01.551 ********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [ceph-osd : Ensure systemd service override directory exists] *************
Saturday 28 March 2026 00:56:43 +0000 (0:00:01.833) 0:09:03.384 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
Saturday 28 March 2026 00:56:43 +0000 (0:00:00.294) 0:09:03.679 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
Saturday 28 March 2026 00:56:44 +0000 (0:00:00.562) 0:09:04.242 ********
ok: [testbed-node-3] => (item=5)
ok: [testbed-node-4] => (item=3)
ok: [testbed-node-5] => (item=2)
ok: [testbed-node-3] => (item=1)
ok: [testbed-node-4] => (item=0)
ok: [testbed-node-5] => (item=4)

TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
Saturday 28 March 2026 00:56:45 +0000 (0:00:01.038) 0:09:05.280 ********
changed: [testbed-node-3] => (item=5)
changed: [testbed-node-4] => (item=3)
changed: [testbed-node-5] => (item=2)
changed: [testbed-node-4] => (item=0)
changed: [testbed-node-5] => (item=4)
changed: [testbed-node-3] => (item=1)

TASK [ceph-osd : Systemd start osd] ********************************************
Saturday 28 March 2026 00:56:47 +0000 (0:00:02.069) 0:09:07.349 ********
changed: [testbed-node-3] => (item=5)
changed: [testbed-node-5] => (item=2)
changed: [testbed-node-4] => (item=3)
changed: [testbed-node-5] => (item=4)
changed: [testbed-node-4] => (item=0)
changed: [testbed-node-3] => (item=1)

TASK [ceph-osd : Unset noup flag] **********************************************
Saturday 28 March 2026 00:56:51 +0000 (0:00:03.746) 0:09:11.095 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Wait for all osd to be up] ************************************
Saturday 28 March 2026 00:56:54 +0000 (0:00:03.148) 0:09:14.244 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Include crush_rules.yml] **************************************
Saturday 28 March 2026 00:57:06 +0000 (0:00:12.455) 0:09:26.699 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Saturday 28 March 2026 00:57:08 +0000 (0:00:01.125) 0:09:27.825 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Osds handler] **********************************
Saturday 28 March 2026 00:57:08 +0000 (0:00:00.407) 0:09:28.232 ********
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
Saturday 28 March 2026 00:57:08 +0000 (0:00:00.536) 0:09:28.769 ********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
Saturday 28 March 2026 00:57:10 +0000 (0:00:01.085) 0:09:29.855 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
Saturday 28 March 2026 00:57:10 +0000 (0:00:00.458) 0:09:30.314 ********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
Saturday 28 March 2026 00:57:10 +0000 (0:00:00.250) 0:09:30.565 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Get pool list] *********************************
Saturday 28 March 2026 00:57:11 +0000 (0:00:00.334) 0:09:30.899 ********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
Saturday 28 March 2026 00:57:11 +0000 (0:00:00.235) 0:09:31.135 ********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
Saturday 28 March 2026 00:57:11 +0000 (0:00:00.226) 0:09:31.362 ********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
Saturday 28 March 2026 00:57:11 +0000 (0:00:00.119) 0:09:31.481 ********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
Saturday 28 March 2026 00:57:11 +0000 (0:00:00.200) 0:09:31.682 ********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
Saturday 28 March 2026 00:57:12 +0000 (0:00:00.880) 0:09:32.562 ********
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
Saturday 28 March 2026 00:57:13 +0000 (0:00:00.506) 0:09:33.068 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
Saturday 28 March 2026 00:57:13 +0000 (0:00:00.431) 0:09:33.499 ********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
Saturday 28 March 2026 00:57:13 +0000 (0:00:00.231) 0:09:33.730 ********
skipping: [testbed-node-3]

PLAY [Apply role ceph-crash] ***************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Saturday 28 March 2026 00:57:14 +0000 (0:00:01.025) 0:09:34.756 ********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include
check_running_containers.yml] ********************* 2026-03-28 01:00:07.136164 | orchestrator | Saturday 28 March 2026 00:57:16 +0000 (0:00:01.397) 0:09:36.154 ******** 2026-03-28 01:00:07.136168 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:07.136217 | orchestrator | 2026-03-28 01:00:07.136223 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 01:00:07.136226 | orchestrator | Saturday 28 March 2026 00:57:17 +0000 (0:00:01.050) 0:09:37.204 ******** 2026-03-28 01:00:07.136230 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.136234 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.136238 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.136242 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.136245 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.136249 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.136253 | orchestrator | 2026-03-28 01:00:07.136257 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-28 01:00:07.136261 | orchestrator | Saturday 28 March 2026 00:57:18 +0000 (0:00:01.351) 0:09:38.556 ******** 2026-03-28 01:00:07.136264 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.136268 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.136272 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.136276 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.136280 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.136283 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.136287 | orchestrator | 2026-03-28 01:00:07.136291 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-28 01:00:07.136295 | orchestrator | Saturday 28 
March 2026 00:57:19 +0000 (0:00:00.744) 0:09:39.301 ******** 2026-03-28 01:00:07.136299 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.136302 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.136306 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.136310 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.136314 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.136318 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.136321 | orchestrator | 2026-03-28 01:00:07.136325 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 01:00:07.136329 | orchestrator | Saturday 28 March 2026 00:57:20 +0000 (0:00:01.151) 0:09:40.452 ******** 2026-03-28 01:00:07.136333 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.136336 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.136340 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.136344 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.136348 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.136351 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.136355 | orchestrator | 2026-03-28 01:00:07.136359 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 01:00:07.136366 | orchestrator | Saturday 28 March 2026 00:57:21 +0000 (0:00:00.746) 0:09:41.199 ******** 2026-03-28 01:00:07.136370 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.136373 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.136377 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.136381 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.136385 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.136389 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.136392 | orchestrator | 2026-03-28 01:00:07.136396 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-03-28 01:00:07.136400 | orchestrator | Saturday 28 March 2026 00:57:22 +0000 (0:00:01.304) 0:09:42.503 ******** 2026-03-28 01:00:07.136404 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.136407 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.136415 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.136419 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.136422 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.136426 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.136430 | orchestrator | 2026-03-28 01:00:07.136434 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 01:00:07.136438 | orchestrator | Saturday 28 March 2026 00:57:23 +0000 (0:00:00.643) 0:09:43.146 ******** 2026-03-28 01:00:07.136447 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.136451 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.136455 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.136459 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.136463 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.136466 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.136470 | orchestrator | 2026-03-28 01:00:07.136474 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 01:00:07.136478 | orchestrator | Saturday 28 March 2026 00:57:24 +0000 (0:00:00.956) 0:09:44.103 ******** 2026-03-28 01:00:07.136481 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.136485 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.136489 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.136493 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.136497 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.136500 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.136504 | orchestrator 
| 2026-03-28 01:00:07.136508 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 01:00:07.136512 | orchestrator | Saturday 28 March 2026 00:57:25 +0000 (0:00:01.289) 0:09:45.393 ******** 2026-03-28 01:00:07.136515 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.136519 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.136523 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.136527 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.136530 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.136534 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.136538 | orchestrator | 2026-03-28 01:00:07.136542 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 01:00:07.136545 | orchestrator | Saturday 28 March 2026 00:57:27 +0000 (0:00:01.464) 0:09:46.857 ******** 2026-03-28 01:00:07.136549 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.136553 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.136557 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.136561 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.136564 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.136568 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.136572 | orchestrator | 2026-03-28 01:00:07.136575 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 01:00:07.136579 | orchestrator | Saturday 28 March 2026 00:57:27 +0000 (0:00:00.598) 0:09:47.456 ******** 2026-03-28 01:00:07.136583 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.136587 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.136591 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.136594 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.136598 | orchestrator | ok: [testbed-node-1] 2026-03-28 
01:00:07.136602 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.136606 | orchestrator | 2026-03-28 01:00:07.136609 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 01:00:07.136613 | orchestrator | Saturday 28 March 2026 00:57:28 +0000 (0:00:00.919) 0:09:48.376 ******** 2026-03-28 01:00:07.136617 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.136621 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.136625 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.136628 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.136632 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.136636 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.136640 | orchestrator | 2026-03-28 01:00:07.136643 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 01:00:07.136647 | orchestrator | Saturday 28 March 2026 00:57:29 +0000 (0:00:00.645) 0:09:49.021 ******** 2026-03-28 01:00:07.136651 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.136655 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.136659 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.136662 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.136666 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.136673 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.136677 | orchestrator | 2026-03-28 01:00:07.136681 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 01:00:07.136685 | orchestrator | Saturday 28 March 2026 00:57:30 +0000 (0:00:00.886) 0:09:49.908 ******** 2026-03-28 01:00:07.136688 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.136692 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.136696 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.136700 | orchestrator | skipping: [testbed-node-0] 
2026-03-28 01:00:07.136704 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.136707 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.136711 | orchestrator | 2026-03-28 01:00:07.136715 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 01:00:07.136719 | orchestrator | Saturday 28 March 2026 00:57:30 +0000 (0:00:00.658) 0:09:50.567 ******** 2026-03-28 01:00:07.136722 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.136726 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.136730 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.136734 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.136737 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.136741 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.136745 | orchestrator | 2026-03-28 01:00:07.136752 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 01:00:07.136756 | orchestrator | Saturday 28 March 2026 00:57:31 +0000 (0:00:00.895) 0:09:51.462 ******** 2026-03-28 01:00:07.136759 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.136763 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.136767 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.136773 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:07.136779 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:07.136785 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:07.136792 | orchestrator | 2026-03-28 01:00:07.136798 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 01:00:07.136804 | orchestrator | Saturday 28 March 2026 00:57:32 +0000 (0:00:00.656) 0:09:52.118 ******** 2026-03-28 01:00:07.136814 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.136819 | orchestrator | skipping: [testbed-node-4] 
2026-03-28 01:00:07.136825 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.136831 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.136837 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.136843 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.136849 | orchestrator | 2026-03-28 01:00:07.136854 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 01:00:07.136860 | orchestrator | Saturday 28 March 2026 00:57:33 +0000 (0:00:00.936) 0:09:53.055 ******** 2026-03-28 01:00:07.136866 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.136872 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.136877 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.136883 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.136888 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.136894 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.136901 | orchestrator | 2026-03-28 01:00:07.136906 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 01:00:07.136913 | orchestrator | Saturday 28 March 2026 00:57:33 +0000 (0:00:00.717) 0:09:53.772 ******** 2026-03-28 01:00:07.136918 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.136924 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.136929 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.136935 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.136940 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.136946 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.136951 | orchestrator | 2026-03-28 01:00:07.136956 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-28 01:00:07.136962 | orchestrator | Saturday 28 March 2026 00:57:35 +0000 (0:00:01.457) 0:09:55.230 ******** 2026-03-28 01:00:07.136972 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-03-28 01:00:07.136978 | orchestrator | 2026-03-28 01:00:07.136983 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-28 01:00:07.136989 | orchestrator | Saturday 28 March 2026 00:57:39 +0000 (0:00:03.834) 0:09:59.064 ******** 2026-03-28 01:00:07.136996 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 01:00:07.137002 | orchestrator | 2026-03-28 01:00:07.137008 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-28 01:00:07.137014 | orchestrator | Saturday 28 March 2026 00:57:41 +0000 (0:00:02.134) 0:10:01.199 ******** 2026-03-28 01:00:07.137020 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:00:07.137026 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:00:07.137032 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:00:07.137038 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.137044 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:07.137050 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:07.137054 | orchestrator | 2026-03-28 01:00:07.137058 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-28 01:00:07.137062 | orchestrator | Saturday 28 March 2026 00:57:43 +0000 (0:00:01.853) 0:10:03.053 ******** 2026-03-28 01:00:07.137066 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:00:07.137069 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:00:07.137073 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:00:07.137078 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:07.137083 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:07.137089 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:07.137095 | orchestrator | 2026-03-28 01:00:07.137102 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-03-28 01:00:07.137107 | orchestrator | Saturday 28 March 2026 00:57:44 +0000 (0:00:01.122) 0:10:04.176 ******** 2026-03-28 01:00:07.137113 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:07.137120 | orchestrator | 2026-03-28 01:00:07.137126 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-28 01:00:07.137132 | orchestrator | Saturday 28 March 2026 00:57:45 +0000 (0:00:01.433) 0:10:05.609 ******** 2026-03-28 01:00:07.137138 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:00:07.137144 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:00:07.137150 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:00:07.137155 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:07.137161 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:07.137168 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:07.137187 | orchestrator | 2026-03-28 01:00:07.137194 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-28 01:00:07.137200 | orchestrator | Saturday 28 March 2026 00:57:47 +0000 (0:00:01.862) 0:10:07.472 ******** 2026-03-28 01:00:07.137206 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:00:07.137212 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:00:07.137218 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:07.137224 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:07.137229 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:07.137236 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:00:07.137241 | orchestrator | 2026-03-28 01:00:07.137244 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-03-28 01:00:07.137248 | orchestrator | Saturday 28 March 2026 00:57:51 +0000 (0:00:03.975) 
0:10:11.447 ******** 2026-03-28 01:00:07.137253 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:07.137257 | orchestrator | 2026-03-28 01:00:07.137265 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-03-28 01:00:07.137273 | orchestrator | Saturday 28 March 2026 00:57:53 +0000 (0:00:01.485) 0:10:12.933 ******** 2026-03-28 01:00:07.137277 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.137281 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.137285 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.137289 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.137292 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.137296 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.137303 | orchestrator | 2026-03-28 01:00:07.137309 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-03-28 01:00:07.137315 | orchestrator | Saturday 28 March 2026 00:57:54 +0000 (0:00:00.960) 0:10:13.894 ******** 2026-03-28 01:00:07.137321 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:00:07.137332 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:00:07.137337 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:00:07.137343 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:07.137349 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:07.137355 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:07.137361 | orchestrator | 2026-03-28 01:00:07.137367 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-03-28 01:00:07.137373 | orchestrator | Saturday 28 March 2026 00:57:56 +0000 (0:00:02.336) 0:10:16.231 ******** 2026-03-28 01:00:07.137378 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.137384 | 
orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.137390 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.137396 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:07.137402 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:07.137407 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:07.137412 | orchestrator | 2026-03-28 01:00:07.137417 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-03-28 01:00:07.137423 | orchestrator | 2026-03-28 01:00:07.137428 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 01:00:07.137435 | orchestrator | Saturday 28 March 2026 00:57:57 +0000 (0:00:01.218) 0:10:17.449 ******** 2026-03-28 01:00:07.137441 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:00:07.137447 | orchestrator | 2026-03-28 01:00:07.137453 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-28 01:00:07.137459 | orchestrator | Saturday 28 March 2026 00:57:58 +0000 (0:00:00.537) 0:10:17.986 ******** 2026-03-28 01:00:07.137465 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:00:07.137471 | orchestrator | 2026-03-28 01:00:07.137477 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 01:00:07.137483 | orchestrator | Saturday 28 March 2026 00:57:59 +0000 (0:00:00.912) 0:10:18.899 ******** 2026-03-28 01:00:07.137488 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.137494 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.137499 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.137505 | orchestrator | 2026-03-28 01:00:07.137512 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2026-03-28 01:00:07.137518 | orchestrator | Saturday 28 March 2026 00:57:59 +0000 (0:00:00.365) 0:10:19.265 ******** 2026-03-28 01:00:07.137524 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.137531 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.137537 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.137542 | orchestrator | 2026-03-28 01:00:07.137548 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-28 01:00:07.137553 | orchestrator | Saturday 28 March 2026 00:58:00 +0000 (0:00:00.708) 0:10:19.973 ******** 2026-03-28 01:00:07.137559 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.137564 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.137569 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.137575 | orchestrator | 2026-03-28 01:00:07.137580 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 01:00:07.137598 | orchestrator | Saturday 28 March 2026 00:58:01 +0000 (0:00:01.135) 0:10:21.108 ******** 2026-03-28 01:00:07.137604 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.137609 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.137615 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.137621 | orchestrator | 2026-03-28 01:00:07.137627 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 01:00:07.137632 | orchestrator | Saturday 28 March 2026 00:58:02 +0000 (0:00:00.816) 0:10:21.925 ******** 2026-03-28 01:00:07.137637 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.137643 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.137649 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.137654 | orchestrator | 2026-03-28 01:00:07.137661 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-28 
01:00:07.137667 | orchestrator | Saturday 28 March 2026 00:58:02 +0000 (0:00:00.343) 0:10:22.269 ******** 2026-03-28 01:00:07.137673 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.137678 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.137684 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.137689 | orchestrator | 2026-03-28 01:00:07.137695 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 01:00:07.137701 | orchestrator | Saturday 28 March 2026 00:58:02 +0000 (0:00:00.306) 0:10:22.575 ******** 2026-03-28 01:00:07.137707 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.137712 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.137718 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.137723 | orchestrator | 2026-03-28 01:00:07.137729 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 01:00:07.137734 | orchestrator | Saturday 28 March 2026 00:58:03 +0000 (0:00:00.677) 0:10:23.253 ******** 2026-03-28 01:00:07.137741 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.137747 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.137753 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.137759 | orchestrator | 2026-03-28 01:00:07.137764 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 01:00:07.137771 | orchestrator | Saturday 28 March 2026 00:58:04 +0000 (0:00:01.042) 0:10:24.295 ******** 2026-03-28 01:00:07.137777 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.137788 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.137796 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.137802 | orchestrator | 2026-03-28 01:00:07.137808 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 01:00:07.137814 | orchestrator | 
Saturday 28 March 2026 00:58:05 +0000 (0:00:00.904) 0:10:25.200 ******** 2026-03-28 01:00:07.137821 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.137827 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.137833 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.137839 | orchestrator | 2026-03-28 01:00:07.137845 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 01:00:07.137852 | orchestrator | Saturday 28 March 2026 00:58:05 +0000 (0:00:00.357) 0:10:25.558 ******** 2026-03-28 01:00:07.137858 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.137872 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.137878 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.137884 | orchestrator | 2026-03-28 01:00:07.137890 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 01:00:07.137896 | orchestrator | Saturday 28 March 2026 00:58:06 +0000 (0:00:00.707) 0:10:26.265 ******** 2026-03-28 01:00:07.137903 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.137909 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.137915 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.137921 | orchestrator | 2026-03-28 01:00:07.137927 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 01:00:07.137933 | orchestrator | Saturday 28 March 2026 00:58:06 +0000 (0:00:00.383) 0:10:26.648 ******** 2026-03-28 01:00:07.137945 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.137952 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.137958 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.137965 | orchestrator | 2026-03-28 01:00:07.137971 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 01:00:07.137977 | orchestrator | Saturday 28 March 2026 00:58:07 +0000 
(0:00:00.345) 0:10:26.993 ******** 2026-03-28 01:00:07.137983 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.137990 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.137996 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.138003 | orchestrator | 2026-03-28 01:00:07.138009 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 01:00:07.138052 | orchestrator | Saturday 28 March 2026 00:58:07 +0000 (0:00:00.324) 0:10:27.318 ******** 2026-03-28 01:00:07.138058 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.138064 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.138070 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.138076 | orchestrator | 2026-03-28 01:00:07.138082 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 01:00:07.138089 | orchestrator | Saturday 28 March 2026 00:58:08 +0000 (0:00:00.613) 0:10:27.932 ******** 2026-03-28 01:00:07.138095 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.138101 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.138108 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.138114 | orchestrator | 2026-03-28 01:00:07.138120 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 01:00:07.138126 | orchestrator | Saturday 28 March 2026 00:58:08 +0000 (0:00:00.332) 0:10:28.265 ******** 2026-03-28 01:00:07.138132 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.138138 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.138145 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.138152 | orchestrator | 2026-03-28 01:00:07.138158 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 01:00:07.138164 | orchestrator | Saturday 28 March 2026 00:58:08 +0000 (0:00:00.337) 
0:10:28.602 ******** 2026-03-28 01:00:07.138215 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.138224 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.138230 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.138237 | orchestrator | 2026-03-28 01:00:07.138243 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 01:00:07.138250 | orchestrator | Saturday 28 March 2026 00:58:09 +0000 (0:00:00.450) 0:10:29.053 ******** 2026-03-28 01:00:07.138256 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.138262 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.138268 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.138274 | orchestrator | 2026-03-28 01:00:07.138281 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-28 01:00:07.138287 | orchestrator | Saturday 28 March 2026 00:58:10 +0000 (0:00:00.978) 0:10:30.031 ******** 2026-03-28 01:00:07.138294 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.138299 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.138306 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-03-28 01:00:07.138312 | orchestrator | 2026-03-28 01:00:07.138319 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-03-28 01:00:07.138325 | orchestrator | Saturday 28 March 2026 00:58:10 +0000 (0:00:00.464) 0:10:30.496 ******** 2026-03-28 01:00:07.138331 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 01:00:07.138338 | orchestrator | 2026-03-28 01:00:07.138344 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-03-28 01:00:07.138350 | orchestrator | Saturday 28 March 2026 00:58:12 +0000 (0:00:02.291) 0:10:32.787 ******** 2026-03-28 01:00:07.138358 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-03-28 01:00:07.138372 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.138378 | orchestrator | 2026-03-28 01:00:07.138384 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-03-28 01:00:07.138391 | orchestrator | Saturday 28 March 2026 00:58:13 +0000 (0:00:00.206) 0:10:32.994 ******** 2026-03-28 01:00:07.138403 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-28 01:00:07.138417 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-28 01:00:07.138423 | orchestrator | 2026-03-28 01:00:07.138430 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-03-28 01:00:07.138436 | orchestrator | Saturday 28 March 2026 00:58:22 +0000 (0:00:09.120) 0:10:42.114 ******** 2026-03-28 01:00:07.138448 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 01:00:07.138455 | orchestrator | 2026-03-28 01:00:07.138461 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-28 01:00:07.138467 | orchestrator | Saturday 28 March 2026 00:58:26 +0000 (0:00:04.193) 0:10:46.307 ******** 2026-03-28 01:00:07.138474 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-4, testbed-node-5, 
testbed-node-3 2026-03-28 01:00:07.138481 | orchestrator | 2026-03-28 01:00:07.138487 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-28 01:00:07.138493 | orchestrator | Saturday 28 March 2026 00:58:27 +0000 (0:00:01.057) 0:10:47.365 ******** 2026-03-28 01:00:07.138500 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-28 01:00:07.138506 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-28 01:00:07.138512 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-28 01:00:07.138518 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-28 01:00:07.138524 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-03-28 01:00:07.138530 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-03-28 01:00:07.138535 | orchestrator | 2026-03-28 01:00:07.138539 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-28 01:00:07.138543 | orchestrator | Saturday 28 March 2026 00:58:28 +0000 (0:00:01.266) 0:10:48.632 ******** 2026-03-28 01:00:07.138546 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:07.138550 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-28 01:00:07.138554 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 01:00:07.138558 | orchestrator | 2026-03-28 01:00:07.138562 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-28 01:00:07.138566 | orchestrator | Saturday 28 March 2026 00:58:31 +0000 (0:00:02.556) 0:10:51.189 ******** 2026-03-28 01:00:07.138570 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-28 01:00:07.138574 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-03-28 01:00:07.138578 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:00:07.138581 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-28 01:00:07.138585 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-28 01:00:07.138589 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:00:07.138593 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-28 01:00:07.138603 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-28 01:00:07.138607 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:00:07.138611 | orchestrator | 2026-03-28 01:00:07.138614 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-28 01:00:07.138618 | orchestrator | Saturday 28 March 2026 00:58:33 +0000 (0:00:01.863) 0:10:53.053 ******** 2026-03-28 01:00:07.138622 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:00:07.138626 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:00:07.138630 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:00:07.138633 | orchestrator | 2026-03-28 01:00:07.138637 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-28 01:00:07.138641 | orchestrator | Saturday 28 March 2026 00:58:36 +0000 (0:00:02.778) 0:10:55.831 ******** 2026-03-28 01:00:07.138645 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.138649 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.138653 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.138656 | orchestrator | 2026-03-28 01:00:07.138660 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-28 01:00:07.138664 | orchestrator | Saturday 28 March 2026 00:58:36 +0000 (0:00:00.456) 0:10:56.288 ******** 2026-03-28 01:00:07.138668 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-28 01:00:07.138672 | orchestrator | 2026-03-28 01:00:07.138676 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-28 01:00:07.138679 | orchestrator | Saturday 28 March 2026 00:58:37 +0000 (0:00:00.904) 0:10:57.192 ******** 2026-03-28 01:00:07.138683 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:00:07.138687 | orchestrator | 2026-03-28 01:00:07.138691 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-28 01:00:07.138694 | orchestrator | Saturday 28 March 2026 00:58:38 +0000 (0:00:00.623) 0:10:57.816 ******** 2026-03-28 01:00:07.138698 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:00:07.138702 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:00:07.138706 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:00:07.138710 | orchestrator | 2026-03-28 01:00:07.138713 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-28 01:00:07.138717 | orchestrator | Saturday 28 March 2026 00:58:39 +0000 (0:00:01.156) 0:10:58.972 ******** 2026-03-28 01:00:07.138721 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:00:07.138725 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:00:07.138729 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:00:07.138733 | orchestrator | 2026-03-28 01:00:07.138737 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-28 01:00:07.138741 | orchestrator | Saturday 28 March 2026 00:58:40 +0000 (0:00:01.404) 0:11:00.376 ******** 2026-03-28 01:00:07.138745 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:00:07.138749 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:00:07.138752 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:00:07.138756 | orchestrator | 2026-03-28 
01:00:07.138760 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-28 01:00:07.138764 | orchestrator | Saturday 28 March 2026 00:58:42 +0000 (0:00:02.057) 0:11:02.434 ******** 2026-03-28 01:00:07.138768 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:00:07.138775 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:00:07.138779 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:00:07.138782 | orchestrator | 2026-03-28 01:00:07.138786 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-28 01:00:07.138790 | orchestrator | Saturday 28 March 2026 00:58:44 +0000 (0:00:01.966) 0:11:04.400 ******** 2026-03-28 01:00:07.138794 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.138798 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.138801 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.138805 | orchestrator | 2026-03-28 01:00:07.138813 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-28 01:00:07.138817 | orchestrator | Saturday 28 March 2026 00:58:46 +0000 (0:00:01.822) 0:11:06.223 ******** 2026-03-28 01:00:07.138821 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:00:07.138824 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:00:07.138828 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:00:07.138832 | orchestrator | 2026-03-28 01:00:07.138836 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-28 01:00:07.138840 | orchestrator | Saturday 28 March 2026 00:58:47 +0000 (0:00:00.797) 0:11:07.020 ******** 2026-03-28 01:00:07.138843 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:00:07.138847 | orchestrator | 2026-03-28 01:00:07.138851 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-03-28 01:00:07.138855 | orchestrator | Saturday 28 March 2026 00:58:48 +0000 (0:00:00.885) 0:11:07.906 ******** 2026-03-28 01:00:07.138859 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.138862 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.138866 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.138870 | orchestrator | 2026-03-28 01:00:07.138874 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-28 01:00:07.138878 | orchestrator | Saturday 28 March 2026 00:58:48 +0000 (0:00:00.384) 0:11:08.291 ******** 2026-03-28 01:00:07.138881 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:00:07.138885 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:00:07.138889 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:00:07.138893 | orchestrator | 2026-03-28 01:00:07.138896 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-28 01:00:07.138900 | orchestrator | Saturday 28 March 2026 00:58:49 +0000 (0:00:01.168) 0:11:09.460 ******** 2026-03-28 01:00:07.138904 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 01:00:07.138908 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 01:00:07.138912 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 01:00:07.138915 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.138919 | orchestrator | 2026-03-28 01:00:07.138923 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-28 01:00:07.138927 | orchestrator | Saturday 28 March 2026 00:58:50 +0000 (0:00:01.000) 0:11:10.460 ******** 2026-03-28 01:00:07.138931 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.138934 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.138938 | orchestrator | ok: [testbed-node-5] 2026-03-28 
01:00:07.138942 | orchestrator | 2026-03-28 01:00:07.138946 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-28 01:00:07.138950 | orchestrator | 2026-03-28 01:00:07.138953 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 01:00:07.138957 | orchestrator | Saturday 28 March 2026 00:58:51 +0000 (0:00:00.931) 0:11:11.392 ******** 2026-03-28 01:00:07.138961 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:00:07.138965 | orchestrator | 2026-03-28 01:00:07.138969 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-28 01:00:07.138973 | orchestrator | Saturday 28 March 2026 00:58:52 +0000 (0:00:00.541) 0:11:11.934 ******** 2026-03-28 01:00:07.139004 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:00:07.139008 | orchestrator | 2026-03-28 01:00:07.139012 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 01:00:07.139016 | orchestrator | Saturday 28 March 2026 00:58:52 +0000 (0:00:00.820) 0:11:12.754 ******** 2026-03-28 01:00:07.139020 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.139023 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.139027 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.139034 | orchestrator | 2026-03-28 01:00:07.139038 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-28 01:00:07.139042 | orchestrator | Saturday 28 March 2026 00:58:53 +0000 (0:00:00.344) 0:11:13.099 ******** 2026-03-28 01:00:07.139046 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.139050 | orchestrator | ok: [testbed-node-4] 2026-03-28 
01:00:07.139053 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.139057 | orchestrator | 2026-03-28 01:00:07.139061 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-28 01:00:07.139065 | orchestrator | Saturday 28 March 2026 00:58:54 +0000 (0:00:00.721) 0:11:13.820 ******** 2026-03-28 01:00:07.139069 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.139072 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.139076 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.139080 | orchestrator | 2026-03-28 01:00:07.139090 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 01:00:07.139094 | orchestrator | Saturday 28 March 2026 00:58:55 +0000 (0:00:01.006) 0:11:14.827 ******** 2026-03-28 01:00:07.139097 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.139101 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.139105 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.139109 | orchestrator | 2026-03-28 01:00:07.139112 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 01:00:07.139116 | orchestrator | Saturday 28 March 2026 00:58:55 +0000 (0:00:00.738) 0:11:15.566 ******** 2026-03-28 01:00:07.139120 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.139124 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.139127 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.139131 | orchestrator | 2026-03-28 01:00:07.139138 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-28 01:00:07.139143 | orchestrator | Saturday 28 March 2026 00:58:56 +0000 (0:00:00.357) 0:11:15.924 ******** 2026-03-28 01:00:07.139146 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.139150 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.139154 | orchestrator | skipping: 
[testbed-node-5] 2026-03-28 01:00:07.139158 | orchestrator | 2026-03-28 01:00:07.139162 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 01:00:07.139165 | orchestrator | Saturday 28 March 2026 00:58:56 +0000 (0:00:00.356) 0:11:16.280 ******** 2026-03-28 01:00:07.139169 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.139188 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.139194 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.139198 | orchestrator | 2026-03-28 01:00:07.139201 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 01:00:07.139205 | orchestrator | Saturday 28 March 2026 00:58:57 +0000 (0:00:00.637) 0:11:16.918 ******** 2026-03-28 01:00:07.139209 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.139213 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.139217 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.139220 | orchestrator | 2026-03-28 01:00:07.139224 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 01:00:07.139228 | orchestrator | Saturday 28 March 2026 00:58:57 +0000 (0:00:00.765) 0:11:17.683 ******** 2026-03-28 01:00:07.139232 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.139236 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.139240 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.139243 | orchestrator | 2026-03-28 01:00:07.139247 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 01:00:07.139251 | orchestrator | Saturday 28 March 2026 00:58:58 +0000 (0:00:00.750) 0:11:18.434 ******** 2026-03-28 01:00:07.139255 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.139259 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.139262 | orchestrator | skipping: [testbed-node-5] 2026-03-28 
01:00:07.139266 | orchestrator | 2026-03-28 01:00:07.139270 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 01:00:07.139277 | orchestrator | Saturday 28 March 2026 00:58:58 +0000 (0:00:00.333) 0:11:18.768 ******** 2026-03-28 01:00:07.139281 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.139285 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.139289 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.139293 | orchestrator | 2026-03-28 01:00:07.139296 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 01:00:07.139300 | orchestrator | Saturday 28 March 2026 00:58:59 +0000 (0:00:00.604) 0:11:19.372 ******** 2026-03-28 01:00:07.139304 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.139308 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.139312 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.139315 | orchestrator | 2026-03-28 01:00:07.139319 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 01:00:07.139323 | orchestrator | Saturday 28 March 2026 00:58:59 +0000 (0:00:00.354) 0:11:19.726 ******** 2026-03-28 01:00:07.139327 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.139330 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.139334 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.139338 | orchestrator | 2026-03-28 01:00:07.139342 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 01:00:07.139346 | orchestrator | Saturday 28 March 2026 00:59:00 +0000 (0:00:00.372) 0:11:20.099 ******** 2026-03-28 01:00:07.139350 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.139353 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.139357 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.139361 | orchestrator | 2026-03-28 
01:00:07.139365 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 01:00:07.139368 | orchestrator | Saturday 28 March 2026 00:59:00 +0000 (0:00:00.397) 0:11:20.497 ******** 2026-03-28 01:00:07.139372 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.139376 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.139380 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.139384 | orchestrator | 2026-03-28 01:00:07.139387 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 01:00:07.139391 | orchestrator | Saturday 28 March 2026 00:59:01 +0000 (0:00:00.665) 0:11:21.162 ******** 2026-03-28 01:00:07.139395 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.139399 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.139402 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.139406 | orchestrator | 2026-03-28 01:00:07.139410 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 01:00:07.139414 | orchestrator | Saturday 28 March 2026 00:59:01 +0000 (0:00:00.344) 0:11:21.506 ******** 2026-03-28 01:00:07.139418 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.139422 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.139426 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.139429 | orchestrator | 2026-03-28 01:00:07.139433 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 01:00:07.139437 | orchestrator | Saturday 28 March 2026 00:59:02 +0000 (0:00:00.341) 0:11:21.848 ******** 2026-03-28 01:00:07.139441 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.139445 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.139449 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.139452 | orchestrator | 2026-03-28 01:00:07.139456 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 01:00:07.139463 | orchestrator | Saturday 28 March 2026 00:59:02 +0000 (0:00:00.372) 0:11:22.220 ******** 2026-03-28 01:00:07.139467 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.139470 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.139474 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.139478 | orchestrator | 2026-03-28 01:00:07.139482 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-28 01:00:07.139486 | orchestrator | Saturday 28 March 2026 00:59:03 +0000 (0:00:00.840) 0:11:23.061 ******** 2026-03-28 01:00:07.139493 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:00:07.139497 | orchestrator | 2026-03-28 01:00:07.139501 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-28 01:00:07.139508 | orchestrator | Saturday 28 March 2026 00:59:03 +0000 (0:00:00.568) 0:11:23.629 ******** 2026-03-28 01:00:07.139512 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:07.139515 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-28 01:00:07.139519 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 01:00:07.139523 | orchestrator | 2026-03-28 01:00:07.139527 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-28 01:00:07.139531 | orchestrator | Saturday 28 March 2026 00:59:06 +0000 (0:00:02.171) 0:11:25.800 ******** 2026-03-28 01:00:07.139535 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-28 01:00:07.139539 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-28 01:00:07.139542 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:00:07.139546 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-03-28 01:00:07.139550 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-28 01:00:07.139554 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:00:07.139558 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-28 01:00:07.139562 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-28 01:00:07.139565 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:00:07.139569 | orchestrator | 2026-03-28 01:00:07.139573 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-28 01:00:07.139577 | orchestrator | Saturday 28 March 2026 00:59:07 +0000 (0:00:01.478) 0:11:27.278 ******** 2026-03-28 01:00:07.139581 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.139585 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.139589 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.139592 | orchestrator | 2026-03-28 01:00:07.139596 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-28 01:00:07.139600 | orchestrator | Saturday 28 March 2026 00:59:07 +0000 (0:00:00.345) 0:11:27.624 ******** 2026-03-28 01:00:07.139604 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:00:07.139607 | orchestrator | 2026-03-28 01:00:07.139611 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-28 01:00:07.139615 | orchestrator | Saturday 28 March 2026 00:59:08 +0000 (0:00:00.548) 0:11:28.173 ******** 2026-03-28 01:00:07.139619 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-28 01:00:07.139624 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-28 01:00:07.139628 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-28 01:00:07.139632 | orchestrator | 2026-03-28 01:00:07.139635 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-28 01:00:07.139639 | orchestrator | Saturday 28 March 2026 00:59:09 +0000 (0:00:01.439) 0:11:29.612 ******** 2026-03-28 01:00:07.139643 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:07.139647 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-28 01:00:07.139651 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:07.139655 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-28 01:00:07.139662 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:07.139666 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-28 01:00:07.139670 | orchestrator | 2026-03-28 01:00:07.139674 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-28 01:00:07.139678 | orchestrator | Saturday 28 March 2026 00:59:14 +0000 (0:00:04.851) 0:11:34.464 ******** 2026-03-28 01:00:07.139682 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:07.139685 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 01:00:07.139689 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:07.139693 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 01:00:07.139697 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:07.139701 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 01:00:07.139704 | orchestrator | 2026-03-28 01:00:07.139711 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-28 01:00:07.139715 | orchestrator | Saturday 28 March 2026 00:59:17 +0000 (0:00:02.652) 0:11:37.117 ******** 2026-03-28 01:00:07.139719 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-28 01:00:07.139723 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:00:07.139727 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-28 01:00:07.139730 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:00:07.139734 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-28 01:00:07.139738 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:00:07.139742 | orchestrator | 2026-03-28 01:00:07.139746 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-28 01:00:07.139753 | orchestrator | Saturday 28 March 2026 00:59:18 +0000 (0:00:01.361) 0:11:38.479 ******** 2026-03-28 01:00:07.139756 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-28 01:00:07.139760 | orchestrator | 2026-03-28 01:00:07.139764 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-28 01:00:07.139768 | orchestrator | Saturday 28 March 2026 00:59:18 +0000 (0:00:00.228) 0:11:38.707 ******** 2026-03-28 01:00:07.139772 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-28 01:00:07.139777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 01:00:07.139781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 01:00:07.139784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 01:00:07.139788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 01:00:07.139792 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.139796 | orchestrator | 2026-03-28 01:00:07.139800 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-28 01:00:07.139803 | orchestrator | Saturday 28 March 2026 00:59:20 +0000 (0:00:01.219) 0:11:39.927 ******** 2026-03-28 01:00:07.139807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 01:00:07.139811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 01:00:07.139815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 01:00:07.139822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 01:00:07.139826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 01:00:07.139830 | orchestrator | skipping: [testbed-node-3] 2026-03-28 
01:00:07.139834 | orchestrator | 2026-03-28 01:00:07.139838 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-28 01:00:07.139841 | orchestrator | Saturday 28 March 2026 00:59:20 +0000 (0:00:00.687) 0:11:40.614 ******** 2026-03-28 01:00:07.139845 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-28 01:00:07.139849 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-28 01:00:07.139853 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-28 01:00:07.139857 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-28 01:00:07.139861 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-28 01:00:07.139864 | orchestrator | 2026-03-28 01:00:07.139868 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-28 01:00:07.139872 | orchestrator | Saturday 28 March 2026 00:59:51 +0000 (0:00:30.684) 0:12:11.298 ******** 2026-03-28 01:00:07.139876 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.139880 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.139884 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.139887 | orchestrator | 2026-03-28 01:00:07.139891 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-28 01:00:07.139895 | orchestrator | 
Saturday 28 March 2026 00:59:51 +0000 (0:00:00.365) 0:12:11.664 ******** 2026-03-28 01:00:07.139899 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.139903 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.139907 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.139910 | orchestrator | 2026-03-28 01:00:07.139914 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-28 01:00:07.139921 | orchestrator | Saturday 28 March 2026 00:59:52 +0000 (0:00:00.410) 0:12:12.075 ******** 2026-03-28 01:00:07.139925 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:00:07.139929 | orchestrator | 2026-03-28 01:00:07.139933 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-28 01:00:07.139937 | orchestrator | Saturday 28 March 2026 00:59:53 +0000 (0:00:00.882) 0:12:12.957 ******** 2026-03-28 01:00:07.139940 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:00:07.139944 | orchestrator | 2026-03-28 01:00:07.139951 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-28 01:00:07.139955 | orchestrator | Saturday 28 March 2026 00:59:53 +0000 (0:00:00.570) 0:12:13.528 ******** 2026-03-28 01:00:07.139959 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:00:07.139963 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:00:07.139966 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:00:07.139970 | orchestrator | 2026-03-28 01:00:07.139974 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-28 01:00:07.139978 | orchestrator | Saturday 28 March 2026 00:59:55 +0000 (0:00:01.307) 0:12:14.836 ******** 2026-03-28 01:00:07.139985 | orchestrator | changed: 
[testbed-node-3] 2026-03-28 01:00:07.139989 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:00:07.139993 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:00:07.139996 | orchestrator | 2026-03-28 01:00:07.140000 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-28 01:00:07.140004 | orchestrator | Saturday 28 March 2026 00:59:56 +0000 (0:00:01.638) 0:12:16.474 ******** 2026-03-28 01:00:07.140008 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:00:07.140012 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:00:07.140015 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:00:07.140019 | orchestrator | 2026-03-28 01:00:07.140023 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-28 01:00:07.140027 | orchestrator | Saturday 28 March 2026 00:59:58 +0000 (0:00:01.826) 0:12:18.301 ******** 2026-03-28 01:00:07.140031 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-28 01:00:07.140035 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-28 01:00:07.140040 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-28 01:00:07.140046 | orchestrator | 2026-03-28 01:00:07.140052 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-28 01:00:07.140059 | orchestrator | Saturday 28 March 2026 01:00:01 +0000 (0:00:02.812) 0:12:21.113 ******** 2026-03-28 01:00:07.140064 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.140075 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.140081 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.140086 | orchestrator 
| 2026-03-28 01:00:07.140092 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-28 01:00:07.140098 | orchestrator | Saturday 28 March 2026 01:00:01 +0000 (0:00:00.427) 0:12:21.541 ******** 2026-03-28 01:00:07.140104 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:00:07.140110 | orchestrator | 2026-03-28 01:00:07.140116 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-28 01:00:07.140123 | orchestrator | Saturday 28 March 2026 01:00:02 +0000 (0:00:00.609) 0:12:22.150 ******** 2026-03-28 01:00:07.140129 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.140135 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.140141 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.140147 | orchestrator | 2026-03-28 01:00:07.140153 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-28 01:00:07.140156 | orchestrator | Saturday 28 March 2026 01:00:03 +0000 (0:00:00.826) 0:12:22.976 ******** 2026-03-28 01:00:07.140160 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:07.140164 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:07.140168 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:07.140184 | orchestrator | 2026-03-28 01:00:07.140189 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-28 01:00:07.140193 | orchestrator | Saturday 28 March 2026 01:00:03 +0000 (0:00:00.442) 0:12:23.419 ******** 2026-03-28 01:00:07.140197 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 01:00:07.140201 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 01:00:07.140205 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 01:00:07.140208 | orchestrator 
| skipping: [testbed-node-3] 2026-03-28 01:00:07.140212 | orchestrator | 2026-03-28 01:00:07.140216 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-28 01:00:07.140220 | orchestrator | Saturday 28 March 2026 01:00:04 +0000 (0:00:00.978) 0:12:24.397 ******** 2026-03-28 01:00:07.140224 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:07.140232 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:07.140236 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:07.140240 | orchestrator | 2026-03-28 01:00:07.140244 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:00:07.140248 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-28 01:00:07.140252 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-28 01:00:07.140259 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-28 01:00:07.140263 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-28 01:00:07.140267 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-28 01:00:07.140274 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-28 01:00:07.140278 | orchestrator | 2026-03-28 01:00:07.140282 | orchestrator | 2026-03-28 01:00:07.140286 | orchestrator | 2026-03-28 01:00:07.140290 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:00:07.140294 | orchestrator | Saturday 28 March 2026 01:00:04 +0000 (0:00:00.275) 0:12:24.672 ******** 2026-03-28 01:00:07.140298 | orchestrator | =============================================================================== 
2026-03-28 01:00:07.140301 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 55.88s 2026-03-28 01:00:07.140305 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 48.50s 2026-03-28 01:00:07.140309 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.68s 2026-03-28 01:00:07.140313 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.23s 2026-03-28 01:00:07.140317 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.94s 2026-03-28 01:00:07.140320 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.51s 2026-03-28 01:00:07.140324 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.46s 2026-03-28 01:00:07.140328 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 11.00s 2026-03-28 01:00:07.140332 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.82s 2026-03-28 01:00:07.140336 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 9.12s 2026-03-28 01:00:07.140340 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.57s 2026-03-28 01:00:07.140343 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.61s 2026-03-28 01:00:07.140347 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 6.35s 2026-03-28 01:00:07.140351 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.19s 2026-03-28 01:00:07.140355 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.85s 2026-03-28 01:00:07.140358 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.19s 2026-03-28 
01:00:07.140362 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.98s 2026-03-28 01:00:07.140366 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.83s 2026-03-28 01:00:07.140370 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.75s 2026-03-28 01:00:07.140373 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 3.55s 2026-03-28 01:00:07.140377 | orchestrator | 2026-03-28 01:00:07 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED 2026-03-28 01:00:07.140384 | orchestrator | 2026-03-28 01:00:07 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:00:10.169531 | orchestrator | 2026-03-28 01:00:10 | INFO  | Task e0d4a19b-0517-499f-a97a-5a2fe67f6c1a is in state STARTED 2026-03-28 01:00:10.173195 | orchestrator | 2026-03-28 01:00:10 | INFO  | Task 941ca173-8f1e-4e15-81f9-7ba5de26472c is in state STARTED 2026-03-28 01:00:10.174907 | orchestrator | 2026-03-28 01:00:10 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED 2026-03-28 01:00:10.175115 | orchestrator | 2026-03-28 01:00:10 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:00:13.234366 | orchestrator | 2026-03-28 01:00:13 | INFO  | Task e0d4a19b-0517-499f-a97a-5a2fe67f6c1a is in state STARTED 2026-03-28 01:00:13.236651 | orchestrator | 2026-03-28 01:00:13 | INFO  | Task 941ca173-8f1e-4e15-81f9-7ba5de26472c is in state STARTED 2026-03-28 01:00:13.238271 | orchestrator | 2026-03-28 01:00:13 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED 2026-03-28 01:00:13.238359 | orchestrator | 2026-03-28 01:00:13 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:00:16.277758 | orchestrator | 2026-03-28 01:00:16 | INFO  | Task e0d4a19b-0517-499f-a97a-5a2fe67f6c1a is in state STARTED 2026-03-28 01:00:16.278790 | orchestrator | 2026-03-28 01:00:16 | INFO  | Task 
941ca173-8f1e-4e15-81f9-7ba5de26472c is in state STARTED 2026-03-28 01:00:16.279648 | orchestrator | 2026-03-28 01:00:16 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED 2026-03-28 01:00:16.279719 | orchestrator | 2026-03-28 01:00:16 | INFO  | Wait 1 second(s) until the next check [identical polling output repeated every ~3 seconds from 01:00:19 through 01:01:02; tasks e0d4a19b-0517-499f-a97a-5a2fe67f6c1a, 941ca173-8f1e-4e15-81f9-7ba5de26472c and 71ab5e07-8098-49fd-93b3-678ef820dcfe remained in state STARTED throughout] 2026-03-28 01:01:02.041336 | orchestrator | 2026-03-28 01:01:02 | INFO  | Task e0d4a19b-0517-499f-a97a-5a2fe67f6c1a is in state STARTED 2026-03-28 01:01:02.043046 | orchestrator | 2026-03-28 01:01:02 | INFO  | Task 941ca173-8f1e-4e15-81f9-7ba5de26472c is in state STARTED 2026-03-28 01:01:02.044747 | orchestrator | 2026-03-28 01:01:02 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state 
STARTED 2026-03-28 01:01:02.044817 | orchestrator | 2026-03-28 01:01:02 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:01:05.080645 | orchestrator | 2026-03-28 01:01:05 | INFO  | Task e0d4a19b-0517-499f-a97a-5a2fe67f6c1a is in state STARTED 2026-03-28 01:01:05.081943 | orchestrator | 2026-03-28 01:01:05 | INFO  | Task 941ca173-8f1e-4e15-81f9-7ba5de26472c is in state STARTED 2026-03-28 01:01:05.083234 | orchestrator | 2026-03-28 01:01:05 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED 2026-03-28 01:01:05.083281 | orchestrator | 2026-03-28 01:01:05 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:01:08.127985 | orchestrator | 2026-03-28 01:01:08.128055 | orchestrator | 2026-03-28 01:01:08.128062 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:01:08.128067 | orchestrator | 2026-03-28 01:01:08.128072 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:01:08.128077 | orchestrator | Saturday 28 March 2026 00:57:57 +0000 (0:00:00.274) 0:00:00.274 ******** 2026-03-28 01:01:08.128081 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:08.128086 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:01:08.128091 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:01:08.128116 | orchestrator | 2026-03-28 01:01:08.128123 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:01:08.128127 | orchestrator | Saturday 28 March 2026 00:57:57 +0000 (0:00:00.309) 0:00:00.584 ******** 2026-03-28 01:01:08.128132 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-28 01:01:08.128137 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-28 01:01:08.128141 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-28 01:01:08.128145 | orchestrator | 2026-03-28 01:01:08.128149 | orchestrator 
| PLAY [Apply role opensearch] *************************************************** 2026-03-28 01:01:08.128172 | orchestrator | 2026-03-28 01:01:08.128177 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-28 01:01:08.128181 | orchestrator | Saturday 28 March 2026 00:57:57 +0000 (0:00:00.444) 0:00:01.028 ******** 2026-03-28 01:01:08.128195 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:01:08.128199 | orchestrator | 2026-03-28 01:01:08.128203 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-28 01:01:08.128207 | orchestrator | Saturday 28 March 2026 00:57:58 +0000 (0:00:00.490) 0:00:01.518 ******** 2026-03-28 01:01:08.128211 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 01:01:08.128215 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 01:01:08.128219 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 01:01:08.128222 | orchestrator | 2026-03-28 01:01:08.128226 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-28 01:01:08.128230 | orchestrator | Saturday 28 March 2026 00:58:00 +0000 (0:00:01.752) 0:00:03.271 ******** 2026-03-28 01:01:08.128236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:01:08.128361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:01:08.128380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:01:08.128390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:01:08.128401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:01:08.128406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:01:08.128410 | orchestrator | 2026-03-28 01:01:08.128414 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-28 01:01:08.128418 | orchestrator | Saturday 28 March 2026 00:58:02 +0000 (0:00:02.169) 0:00:05.441 ******** 2026-03-28 01:01:08.128422 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:01:08.128426 | orchestrator | 2026-03-28 01:01:08.128430 | orchestrator | TASK 
[service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-28 01:01:08.128434 | orchestrator | Saturday 28 March 2026 00:58:02 +0000 (0:00:00.678) 0:00:06.120 ******** 2026-03-28 01:01:08.128444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:01:08.128456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:01:08.128460 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:01:08.128464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:01:08.128473 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:01:08.128484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:01:08.128488 | orchestrator | 2026-03-28 01:01:08.128492 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-28 01:01:08.128496 | orchestrator | Saturday 28 March 2026 00:58:05 +0000 (0:00:03.079) 0:00:09.199 ******** 2026-03-28 01:01:08.128500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 01:01:08.128504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 01:01:08.128509 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:08.128517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 01:01:08.128531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 01:01:08.128535 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:08.128539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 01:01:08.128543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 01:01:08.128547 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:08.128551 | orchestrator | 2026-03-28 01:01:08.128555 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-28 01:01:08.128559 | orchestrator | Saturday 28 March 2026 00:58:07 +0000 (0:00:01.512) 0:00:10.711 ******** 2026-03-28 01:01:08.128566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 01:01:08.128578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 01:01:08.128582 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:08.128586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 01:01:08.128590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 01:01:08.128594 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:08.128604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 01:01:08.128611 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 01:01:08.128631 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:08.128635 | orchestrator | 2026-03-28 01:01:08.128639 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-28 01:01:08.128643 | orchestrator | Saturday 28 March 2026 00:58:08 +0000 (0:00:00.872) 0:00:11.584 ******** 2026-03-28 01:01:08.128647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:01:08.128651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:01:08.128656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:01:08.128669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:01:08.128676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:01:08.128680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:01:08.128688 | orchestrator | 2026-03-28 01:01:08.128692 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-28 01:01:08.128696 | orchestrator | Saturday 28 March 2026 00:58:11 +0000 (0:00:03.109) 0:00:14.694 ******** 2026-03-28 01:01:08.128700 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:01:08.128704 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:01:08.128707 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:01:08.128711 | orchestrator | 2026-03-28 01:01:08.128715 | orchestrator 
| TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-28 01:01:08.128719 | orchestrator | Saturday 28 March 2026 00:58:14 +0000 (0:00:02.669) 0:00:17.364 ******** 2026-03-28 01:01:08.128723 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:01:08.128726 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:01:08.128730 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:01:08.128734 | orchestrator | 2026-03-28 01:01:08.128738 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-28 01:01:08.128742 | orchestrator | Saturday 28 March 2026 00:58:16 +0000 (0:00:02.582) 0:00:19.947 ******** 2026-03-28 01:01:08.128749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:01:08.128756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:01:08.128761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:01:08.128765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:01:08.128779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:01:08.128840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:01:08.128848 | orchestrator | 2026-03-28 01:01:08.128855 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-28 01:01:08.128861 | orchestrator | Saturday 28 March 2026 00:58:18 +0000 (0:00:02.187) 0:00:22.135 ******** 2026-03-28 01:01:08.128865 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:08.128869 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:08.128873 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:08.128877 | orchestrator | 2026-03-28 01:01:08.128880 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-28 01:01:08.128884 | orchestrator | Saturday 28 March 2026 00:58:19 +0000 (0:00:00.330) 0:00:22.465 ******** 2026-03-28 01:01:08.128888 | orchestrator | 2026-03-28 01:01:08.128892 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-28 01:01:08.128896 | orchestrator | Saturday 28 March 2026 00:58:19 +0000 (0:00:00.067) 0:00:22.532 ******** 2026-03-28 01:01:08.128899 | orchestrator | 2026-03-28 01:01:08.128903 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-28 01:01:08.128912 | 
orchestrator | Saturday 28 March 2026 00:58:19 +0000 (0:00:00.065) 0:00:22.598 ******** 2026-03-28 01:01:08.128916 | orchestrator | 2026-03-28 01:01:08.128920 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-28 01:01:08.128924 | orchestrator | Saturday 28 March 2026 00:58:19 +0000 (0:00:00.067) 0:00:22.666 ******** 2026-03-28 01:01:08.128928 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:08.128931 | orchestrator | 2026-03-28 01:01:08.128935 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-28 01:01:08.128939 | orchestrator | Saturday 28 March 2026 00:58:20 +0000 (0:00:00.666) 0:00:23.332 ******** 2026-03-28 01:01:08.128943 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:08.128947 | orchestrator | 2026-03-28 01:01:08.128950 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-28 01:01:08.128954 | orchestrator | Saturday 28 March 2026 00:58:20 +0000 (0:00:00.242) 0:00:23.575 ******** 2026-03-28 01:01:08.128958 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:01:08.128962 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:01:08.128966 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:01:08.128970 | orchestrator | 2026-03-28 01:01:08.128973 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-28 01:01:08.128977 | orchestrator | Saturday 28 March 2026 00:59:25 +0000 (0:01:04.980) 0:01:28.555 ******** 2026-03-28 01:01:08.128981 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:01:08.128985 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:01:08.128988 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:01:08.128992 | orchestrator | 2026-03-28 01:01:08.128996 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-28 01:01:08.129000 | 
orchestrator | Saturday 28 March 2026 01:00:54 +0000 (0:01:28.696) 0:02:57.252 ******** 2026-03-28 01:01:08.129003 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:01:08.129007 | orchestrator | 2026-03-28 01:01:08.129011 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-28 01:01:08.129015 | orchestrator | Saturday 28 March 2026 01:00:54 +0000 (0:00:00.739) 0:02:57.991 ******** 2026-03-28 01:01:08.129019 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:08.129023 | orchestrator | 2026-03-28 01:01:08.129027 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-28 01:01:08.129031 | orchestrator | Saturday 28 March 2026 01:00:57 +0000 (0:00:02.742) 0:03:00.733 ******** 2026-03-28 01:01:08.129035 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:08.129038 | orchestrator | 2026-03-28 01:01:08.129042 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-28 01:01:08.129046 | orchestrator | Saturday 28 March 2026 01:00:59 +0000 (0:00:02.351) 0:03:03.084 ******** 2026-03-28 01:01:08.129050 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:01:08.129053 | orchestrator | 2026-03-28 01:01:08.129057 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-28 01:01:08.129061 | orchestrator | Saturday 28 March 2026 01:01:02 +0000 (0:00:03.008) 0:03:06.093 ******** 2026-03-28 01:01:08.129065 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:01:08.129069 | orchestrator | 2026-03-28 01:01:08.129076 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:01:08.129081 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 01:01:08.129087 | orchestrator 
| testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 01:01:08.129091 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 01:01:08.129139 | orchestrator | 2026-03-28 01:01:08.129145 | orchestrator | 2026-03-28 01:01:08.129149 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:01:08.129157 | orchestrator | Saturday 28 March 2026 01:01:05 +0000 (0:00:02.728) 0:03:08.822 ******** 2026-03-28 01:01:08.129161 | orchestrator | =============================================================================== 2026-03-28 01:01:08.129165 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 88.70s 2026-03-28 01:01:08.129169 | orchestrator | opensearch : Restart opensearch container ------------------------------ 64.98s 2026-03-28 01:01:08.129176 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.11s 2026-03-28 01:01:08.129180 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.08s 2026-03-28 01:01:08.129184 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.01s 2026-03-28 01:01:08.129188 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.74s 2026-03-28 01:01:08.129192 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.73s 2026-03-28 01:01:08.129196 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.67s 2026-03-28 01:01:08.129200 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.58s 2026-03-28 01:01:08.129203 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.35s 2026-03-28 01:01:08.129207 | orchestrator | opensearch : Check opensearch 
containers -------------------------------- 2.19s 2026-03-28 01:01:08.129211 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.17s 2026-03-28 01:01:08.129215 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.75s 2026-03-28 01:01:08.129219 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.51s 2026-03-28 01:01:08.129223 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.87s 2026-03-28 01:01:08.129227 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.74s 2026-03-28 01:01:08.129234 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.68s 2026-03-28 01:01:08.129241 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.67s 2026-03-28 01:01:08.129247 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s 2026-03-28 01:01:08.129254 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s 2026-03-28 01:01:08.129258 | orchestrator | 2026-03-28 01:01:08 | INFO  | Task e0d4a19b-0517-499f-a97a-5a2fe67f6c1a is in state SUCCESS 2026-03-28 01:01:08.129263 | orchestrator | 2026-03-28 01:01:08 | INFO  | Task 941ca173-8f1e-4e15-81f9-7ba5de26472c is in state STARTED 2026-03-28 01:01:08.130891 | orchestrator | 2026-03-28 01:01:08 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED 2026-03-28 01:01:08.130979 | orchestrator | 2026-03-28 01:01:08 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:01:11.177360 | orchestrator | 2026-03-28 01:01:11 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED 2026-03-28 01:01:11.182198 | orchestrator | 2026-03-28 01:01:11 | INFO  | Task 941ca173-8f1e-4e15-81f9-7ba5de26472c is in state SUCCESS 2026-03-28 01:01:11.184584 | 
orchestrator | 2026-03-28 01:01:11.184639 | orchestrator | 2026-03-28 01:01:11.184652 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-03-28 01:01:11.184665 | orchestrator | 2026-03-28 01:01:11.184677 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-28 01:01:11.184688 | orchestrator | Saturday 28 March 2026 00:57:56 +0000 (0:00:00.106) 0:00:00.106 ******** 2026-03-28 01:01:11.184700 | orchestrator | ok: [localhost] => { 2026-03-28 01:01:11.184713 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-03-28 01:01:11.184725 | orchestrator | } 2026-03-28 01:01:11.184737 | orchestrator | 2026-03-28 01:01:11.184748 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-03-28 01:01:11.184781 | orchestrator | Saturday 28 March 2026 00:57:56 +0000 (0:00:00.043) 0:00:00.150 ******** 2026-03-28 01:01:11.184794 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-03-28 01:01:11.184807 | orchestrator | ...ignoring 2026-03-28 01:01:11.184818 | orchestrator | 2026-03-28 01:01:11.184829 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-03-28 01:01:11.184841 | orchestrator | Saturday 28 March 2026 00:57:59 +0000 (0:00:02.895) 0:00:03.046 ******** 2026-03-28 01:01:11.184852 | orchestrator | skipping: [localhost] 2026-03-28 01:01:11.185040 | orchestrator | 2026-03-28 01:01:11.185054 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-03-28 01:01:11.185065 | orchestrator | Saturday 28 March 2026 00:57:59 +0000 (0:00:00.053) 0:00:03.099 ******** 2026-03-28 01:01:11.185076 | orchestrator | ok: [localhost] 2026-03-28 01:01:11.185087 | orchestrator | 2026-03-28 01:01:11.185123 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:01:11.185158 | orchestrator | 2026-03-28 01:01:11.185170 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:01:11.185181 | orchestrator | Saturday 28 March 2026 00:57:59 +0000 (0:00:00.154) 0:00:03.253 ******** 2026-03-28 01:01:11.185192 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:11.185203 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:01:11.185214 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:01:11.185225 | orchestrator | 2026-03-28 01:01:11.185236 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:01:11.185247 | orchestrator | Saturday 28 March 2026 00:58:00 +0000 (0:00:00.398) 0:00:03.652 ******** 2026-03-28 01:01:11.185258 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-28 01:01:11.185269 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-03-28 01:01:11.185281 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-28 01:01:11.185291 | orchestrator | 2026-03-28 01:01:11.185302 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-28 01:01:11.185313 | orchestrator | 2026-03-28 01:01:11.185324 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-28 01:01:11.185344 | orchestrator | Saturday 28 March 2026 00:58:01 +0000 (0:00:00.984) 0:00:04.636 ******** 2026-03-28 01:01:11.185356 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 01:01:11.185368 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-28 01:01:11.185379 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-28 01:01:11.185390 | orchestrator | 2026-03-28 01:01:11.185401 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-28 01:01:11.185412 | orchestrator | Saturday 28 March 2026 00:58:01 +0000 (0:00:00.377) 0:00:05.013 ******** 2026-03-28 01:01:11.185423 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:01:11.185435 | orchestrator | 2026-03-28 01:01:11.185446 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-28 01:01:11.185456 | orchestrator | Saturday 28 March 2026 00:58:02 +0000 (0:00:00.602) 0:00:05.616 ******** 2026-03-28 01:01:11.185492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 01:01:11.185530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 01:01:11.185544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 01:01:11.185564 | orchestrator | 2026-03-28 01:01:11.185583 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-28 01:01:11.185594 | orchestrator | Saturday 28 March 2026 00:58:06 +0000 (0:00:03.709) 0:00:09.325 ******** 2026-03-28 01:01:11.185606 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:01:11.185617 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:11.185636 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:11.185655 | orchestrator | 2026-03-28 01:01:11.185677 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-28 01:01:11.185696 | orchestrator | Saturday 28 March 2026 00:58:06 +0000 (0:00:00.940) 0:00:10.266 ******** 2026-03-28 01:01:11.185716 | orchestrator | 
skipping: [testbed-node-1] 2026-03-28 01:01:11.185737 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:11.185756 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:01:11.185774 | orchestrator | 2026-03-28 01:01:11.185794 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-28 01:01:11.185813 | orchestrator | Saturday 28 March 2026 00:58:08 +0000 (0:00:01.658) 0:00:11.924 ******** 2026-03-28 01:01:11.185846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 01:01:11.185886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 01:01:11.185928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 01:01:11.185950 | orchestrator | 2026-03-28 01:01:11.185974 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-28 01:01:11.185998 | orchestrator | Saturday 28 March 2026 00:58:12 +0000 (0:00:04.129) 0:00:16.054 ******** 2026-03-28 01:01:11.186078 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:11.186135 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:11.186156 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:01:11.186175 | orchestrator | 2026-03-28 01:01:11.186189 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-28 01:01:11.186200 | orchestrator | Saturday 28 March 2026 00:58:14 +0000 (0:00:01.349) 0:00:17.404 ******** 2026-03-28 01:01:11.186221 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:01:11.186233 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:01:11.186243 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:01:11.186254 | orchestrator | 2026-03-28 01:01:11.186265 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-28 01:01:11.186276 | orchestrator | Saturday 28 March 2026 00:58:18 +0000 (0:00:04.545) 0:00:21.949 ******** 2026-03-28 01:01:11.186288 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:01:11.186299 | orchestrator | 2026-03-28 01:01:11.186310 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-28 01:01:11.186321 | orchestrator | Saturday 28 March 2026 00:58:19 +0000 (0:00:00.540) 0:00:22.490 ******** 2026-03-28 01:01:11.186345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 01:01:11.186359 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:11.186377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 01:01:11.186396 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:11.186415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 01:01:11.186428 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:11.186439 | orchestrator | 2026-03-28 01:01:11.186450 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-28 01:01:11.186461 | orchestrator | Saturday 28 March 2026 00:58:22 
+0000 (0:00:03.126) 0:00:25.616 ******** 2026-03-28 01:01:11.186478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 01:01:11.186496 | orchestrator | skipping: [testbed-node-0] 2026-03-28 
01:01:11.186515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 01:01:11.186528 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:11.186544 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 01:01:11.186562 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:11.186573 | orchestrator | 2026-03-28 01:01:11.186584 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over backend internal TLS key] ***** 2026-03-28 01:01:11.186595 | orchestrator | Saturday 28 March 2026 00:58:25 +0000 (0:00:03.595) 0:00:29.211 ******** 2026-03-28 01:01:11.186613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2026-03-28 01:01:11.186625 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:11.186638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 01:01:11.186660 
| orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:11.186672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 01:01:11.186684 | orchestrator | skipping: [testbed-node-0] 2026-03-28 
01:01:11.186695 | orchestrator | 2026-03-28 01:01:11.186706 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-28 01:01:11.186717 | orchestrator | Saturday 28 March 2026 00:58:29 +0000 (0:00:03.589) 0:00:32.801 ******** 2026-03-28 01:01:11.186737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 01:01:11.186768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2026-03-28 01:01:11.186789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 01:01:11.186809 | orchestrator | 2026-03-28 01:01:11.186820 | orchestrator | TASK [mariadb : Create MariaDB 
volume] ***************************************** 2026-03-28 01:01:11.186831 | orchestrator | Saturday 28 March 2026 00:58:33 +0000 (0:00:03.971) 0:00:36.772 ******** 2026-03-28 01:01:11.186842 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:01:11.186853 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:01:11.186865 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:01:11.186875 | orchestrator | 2026-03-28 01:01:11.186887 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-03-28 01:01:11.186898 | orchestrator | Saturday 28 March 2026 00:58:34 +0000 (0:00:01.271) 0:00:38.044 ******** 2026-03-28 01:01:11.186909 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:11.186920 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:01:11.186931 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:01:11.186942 | orchestrator | 2026-03-28 01:01:11.186953 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-03-28 01:01:11.186969 | orchestrator | Saturday 28 March 2026 00:58:35 +0000 (0:00:00.444) 0:00:38.488 ******** 2026-03-28 01:01:11.186980 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:11.186991 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:01:11.187002 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:01:11.187013 | orchestrator | 2026-03-28 01:01:11.187024 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-28 01:01:11.187035 | orchestrator | Saturday 28 March 2026 00:58:35 +0000 (0:00:00.332) 0:00:38.821 ******** 2026-03-28 01:01:11.187047 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-03-28 01:01:11.187059 | orchestrator | ...ignoring 2026-03-28 01:01:11.187070 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-03-28 01:01:11.187081 | orchestrator | ...ignoring 2026-03-28 01:01:11.187219 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-03-28 01:01:11.187236 | orchestrator | ...ignoring 2026-03-28 01:01:11.187247 | orchestrator | 2026-03-28 01:01:11.187258 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-28 01:01:11.187269 | orchestrator | Saturday 28 March 2026 00:58:46 +0000 (0:00:10.817) 0:00:49.639 ******** 2026-03-28 01:01:11.187280 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:11.187291 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:01:11.187301 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:01:11.187312 | orchestrator | 2026-03-28 01:01:11.187323 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-28 01:01:11.187334 | orchestrator | Saturday 28 March 2026 00:58:46 +0000 (0:00:00.488) 0:00:50.128 ******** 2026-03-28 01:01:11.187345 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:11.187356 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:11.187367 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:11.187378 | orchestrator | 2026-03-28 01:01:11.187388 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-28 01:01:11.187399 | orchestrator | Saturday 28 March 2026 00:58:47 +0000 (0:00:00.794) 0:00:50.922 ******** 2026-03-28 01:01:11.187410 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:11.187421 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:11.187432 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:11.187443 | orchestrator | 2026-03-28 01:01:11.187454 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-28 01:01:11.187464 | orchestrator | Saturday 28 March 2026 00:58:48 +0000 (0:00:00.467) 0:00:51.389 ******** 2026-03-28 01:01:11.187475 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:11.187486 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:11.187504 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:11.187515 | orchestrator | 2026-03-28 01:01:11.187526 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-28 01:01:11.187545 | orchestrator | Saturday 28 March 2026 00:58:48 +0000 (0:00:00.487) 0:00:51.877 ******** 2026-03-28 01:01:11.187557 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:11.187567 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:01:11.187578 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:01:11.187589 | orchestrator | 2026-03-28 01:01:11.187600 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-28 01:01:11.187611 | orchestrator | Saturday 28 March 2026 00:58:49 +0000 (0:00:00.495) 0:00:52.372 ******** 2026-03-28 01:01:11.187622 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:11.187633 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:11.187644 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:11.187655 | orchestrator | 2026-03-28 01:01:11.187666 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-28 01:01:11.187677 | orchestrator | Saturday 28 March 2026 00:58:49 +0000 (0:00:00.711) 0:00:53.084 ******** 2026-03-28 01:01:11.187688 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:11.187700 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:11.187711 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-03-28 01:01:11.187722 | orchestrator | 2026-03-28 
01:01:11.187733 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-03-28 01:01:11.187744 | orchestrator | Saturday 28 March 2026 00:58:50 +0000 (0:00:00.412) 0:00:53.497 ******** 2026-03-28 01:01:11.187754 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:01:11.187764 | orchestrator | 2026-03-28 01:01:11.187774 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-03-28 01:01:11.187784 | orchestrator | Saturday 28 March 2026 00:59:00 +0000 (0:00:10.664) 0:01:04.162 ******** 2026-03-28 01:01:11.187793 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:11.187803 | orchestrator | 2026-03-28 01:01:11.187813 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-28 01:01:11.187823 | orchestrator | Saturday 28 March 2026 00:59:01 +0000 (0:00:00.159) 0:01:04.321 ******** 2026-03-28 01:01:11.187833 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:11.187842 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:11.187852 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:11.187862 | orchestrator | 2026-03-28 01:01:11.187872 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-03-28 01:01:11.187882 | orchestrator | Saturday 28 March 2026 00:59:02 +0000 (0:00:01.149) 0:01:05.471 ******** 2026-03-28 01:01:11.187891 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:01:11.187901 | orchestrator | 2026-03-28 01:01:11.187911 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-03-28 01:01:11.187921 | orchestrator | Saturday 28 March 2026 00:59:10 +0000 (0:00:08.293) 0:01:13.764 ******** 2026-03-28 01:01:11.187931 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:11.187941 | orchestrator | 2026-03-28 01:01:11.187950 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2026-03-28 01:01:11.187960 | orchestrator | Saturday 28 March 2026 00:59:13 +0000 (0:00:02.611) 0:01:16.375 ******** 2026-03-28 01:01:11.187976 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:11.187986 | orchestrator | 2026-03-28 01:01:11.187996 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-03-28 01:01:11.188006 | orchestrator | Saturday 28 March 2026 00:59:15 +0000 (0:00:02.846) 0:01:19.221 ******** 2026-03-28 01:01:11.188016 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:01:11.188025 | orchestrator | 2026-03-28 01:01:11.188035 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-28 01:01:11.188045 | orchestrator | Saturday 28 March 2026 00:59:16 +0000 (0:00:00.127) 0:01:19.349 ******** 2026-03-28 01:01:11.188055 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:11.188071 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:11.188080 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:11.188090 | orchestrator | 2026-03-28 01:01:11.188120 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-28 01:01:11.188130 | orchestrator | Saturday 28 March 2026 00:59:16 +0000 (0:00:00.349) 0:01:19.699 ******** 2026-03-28 01:01:11.188140 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:11.188150 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-28 01:01:11.188159 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:01:11.188169 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:01:11.188179 | orchestrator | 2026-03-28 01:01:11.188188 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-28 01:01:11.188198 | orchestrator | skipping: no hosts matched 2026-03-28 01:01:11.188207 | orchestrator | 2026-03-28 01:01:11.188217 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-28 01:01:11.188227 | orchestrator | 2026-03-28 01:01:11.188236 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-28 01:01:11.188246 | orchestrator | Saturday 28 March 2026 00:59:17 +0000 (0:00:00.616) 0:01:20.315 ******** 2026-03-28 01:01:11.188256 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:01:11.188266 | orchestrator | 2026-03-28 01:01:11.188275 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-28 01:01:11.188285 | orchestrator | Saturday 28 March 2026 00:59:40 +0000 (0:00:23.463) 0:01:43.779 ******** 2026-03-28 01:01:11.188295 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:01:11.188305 | orchestrator | 2026-03-28 01:01:11.188314 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-28 01:01:11.188324 | orchestrator | Saturday 28 March 2026 00:59:51 +0000 (0:00:10.647) 0:01:54.427 ******** 2026-03-28 01:01:11.188334 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:01:11.188343 | orchestrator | 2026-03-28 01:01:11.188353 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-28 01:01:11.188363 | orchestrator | 2026-03-28 01:01:11.188373 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-28 01:01:11.188383 | orchestrator | Saturday 28 March 2026 00:59:53 +0000 (0:00:02.688) 0:01:57.116 ******** 2026-03-28 01:01:11.188392 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:01:11.188402 | orchestrator | 2026-03-28 01:01:11.188412 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-28 01:01:11.188427 | orchestrator | Saturday 28 March 2026 01:00:14 +0000 (0:00:20.390) 0:02:17.506 ******** 2026-03-28 01:01:11.188437 | 
orchestrator | ok: [testbed-node-2] 2026-03-28 01:01:11.188447 | orchestrator | 2026-03-28 01:01:11.188461 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-28 01:01:11.188476 | orchestrator | Saturday 28 March 2026 01:00:30 +0000 (0:00:16.654) 0:02:34.161 ******** 2026-03-28 01:01:11.188493 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:01:11.188508 | orchestrator | 2026-03-28 01:01:11.188525 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-28 01:01:11.188541 | orchestrator | 2026-03-28 01:01:11.188556 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-28 01:01:11.188572 | orchestrator | Saturday 28 March 2026 01:00:33 +0000 (0:00:03.090) 0:02:37.252 ******** 2026-03-28 01:01:11.188588 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:01:11.188605 | orchestrator | 2026-03-28 01:01:11.188621 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-28 01:01:11.188634 | orchestrator | Saturday 28 March 2026 01:00:46 +0000 (0:00:12.677) 0:02:49.930 ******** 2026-03-28 01:01:11.188643 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:11.188653 | orchestrator | 2026-03-28 01:01:11.188663 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-28 01:01:11.188672 | orchestrator | Saturday 28 March 2026 01:00:51 +0000 (0:00:04.650) 0:02:54.580 ******** 2026-03-28 01:01:11.188690 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:11.188700 | orchestrator | 2026-03-28 01:01:11.188710 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-28 01:01:11.188720 | orchestrator | 2026-03-28 01:01:11.188729 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-28 01:01:11.188739 | orchestrator | 
Saturday 28 March 2026 01:00:54 +0000 (0:00:02.955) 0:02:57.535 ********
2026-03-28 01:01:11.188748 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:01:11.188758 | orchestrator |
2026-03-28 01:01:11.188767 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-03-28 01:01:11.188777 | orchestrator | Saturday 28 March 2026 01:00:54 +0000 (0:00:00.599) 0:02:58.135 ********
2026-03-28 01:01:11.188791 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:01:11.188806 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:01:11.188820 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:01:11.188836 | orchestrator |
2026-03-28 01:01:11.188852 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-03-28 01:01:11.188868 | orchestrator | Saturday 28 March 2026 01:00:57 +0000 (0:00:02.704) 0:03:00.839 ********
2026-03-28 01:01:11.188881 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:01:11.188891 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:01:11.188900 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:01:11.188910 | orchestrator |
2026-03-28 01:01:11.188920 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-03-28 01:01:11.188930 | orchestrator | Saturday 28 March 2026 01:00:59 +0000 (0:00:02.353) 0:03:03.192 ********
2026-03-28 01:01:11.188939 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:01:11.188955 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:01:11.188965 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:01:11.188974 | orchestrator |
2026-03-28 01:01:11.188984 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-03-28 01:01:11.188994 | orchestrator | Saturday 28 March 2026 01:01:02 +0000 (0:00:02.392) 0:03:05.585 ********
2026-03-28 01:01:11.189003 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:01:11.189013 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:01:11.189023 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:01:11.189032 | orchestrator |
2026-03-28 01:01:11.189042 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-03-28 01:01:11.189052 | orchestrator | Saturday 28 March 2026 01:01:04 +0000 (0:00:02.505) 0:03:08.091 ********
2026-03-28 01:01:11.189061 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:01:11.189071 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:01:11.189081 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:01:11.189091 | orchestrator |
2026-03-28 01:01:11.189159 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-03-28 01:01:11.189169 | orchestrator | Saturday 28 March 2026 01:01:08 +0000 (0:00:03.311) 0:03:11.403 ********
2026-03-28 01:01:11.189179 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:01:11.189188 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:01:11.189198 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:01:11.189207 | orchestrator |
2026-03-28 01:01:11.189217 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:01:11.189227 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-03-28 01:01:11.189237 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-03-28 01:01:11.189248 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-03-28 01:01:11.189258 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-03-28 01:01:11.189279 | orchestrator |
2026-03-28 01:01:11.189289 | orchestrator |
2026-03-28 01:01:11.189299 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:01:11.189308 | orchestrator | Saturday 28 March 2026 01:01:08 +0000 (0:00:00.248) 0:03:11.652 ********
2026-03-28 01:01:11.189318 | orchestrator | ===============================================================================
2026-03-28 01:01:11.189327 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 43.85s
2026-03-28 01:01:11.189337 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 27.30s
2026-03-28 01:01:11.189354 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.68s
2026-03-28 01:01:11.189362 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.82s
2026-03-28 01:01:11.189370 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.66s
2026-03-28 01:01:11.189378 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.29s
2026-03-28 01:01:11.189386 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.78s
2026-03-28 01:01:11.189394 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.65s
2026-03-28 01:01:11.189402 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.55s
2026-03-28 01:01:11.189410 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.13s
2026-03-28 01:01:11.189418 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.97s
2026-03-28 01:01:11.189426 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.71s
2026-03-28 01:01:11.189434 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.60s
2026-03-28 01:01:11.189441 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.59s
2026-03-28 01:01:11.189449 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.31s
2026-03-28 01:01:11.189457 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.13s
2026-03-28 01:01:11.189465 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.96s
2026-03-28 01:01:11.189473 | orchestrator | Check MariaDB service --------------------------------------------------- 2.90s
2026-03-28 01:01:11.189481 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.85s
2026-03-28 01:01:11.189489 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.70s
2026-03-28 01:01:11.189497 | orchestrator | 2026-03-28 01:01:11 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:01:11.189505 | orchestrator | 2026-03-28 01:01:11 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:01:11.189514 | orchestrator | 2026-03-28 01:01:11 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:01:14.252687 | orchestrator | 2026-03-28 01:01:14 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:01:14.252785 | orchestrator | 2026-03-28 01:01:14 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:01:14.254943 | orchestrator | 2026-03-28 01:01:14 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:01:14.255024 | orchestrator | 2026-03-28 01:01:14 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:01:17.301875 | orchestrator | 2026-03-28 01:01:17 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:01:17.303868 | orchestrator | 2026-03-28 01:01:17 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:01:17.306617 | orchestrator | 2026-03-28 01:01:17 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:01:17.306702 | orchestrator | 2026-03-28 01:01:17 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:01:20.351118 | orchestrator | 2026-03-28 01:01:20 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:01:20.352509 | orchestrator | 2026-03-28 01:01:20 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:01:20.354678 | orchestrator | 2026-03-28 01:01:20 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:01:20.355772 | orchestrator | 2026-03-28 01:01:20 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:01:23.398066 | orchestrator | 2026-03-28 01:01:23 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:01:23.398923 | orchestrator | 2026-03-28 01:01:23 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:01:23.402529 | orchestrator | 2026-03-28 01:01:23 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:01:23.402624 | orchestrator | 2026-03-28 01:01:23 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:01:26.453544 | orchestrator | 2026-03-28 01:01:26 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:01:26.454537 | orchestrator | 2026-03-28 01:01:26 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:01:26.456774 | orchestrator | 2026-03-28 01:01:26 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:01:26.456820 | orchestrator | 2026-03-28 01:01:26 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:01:29.489406 | orchestrator | 2026-03-28 01:01:29 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:01:29.490179 | orchestrator | 2026-03-28 01:01:29 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:01:29.491670 | orchestrator | 2026-03-28 01:01:29 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:01:29.491715 | orchestrator | 2026-03-28 01:01:29 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:01:32.526338 | orchestrator | 2026-03-28 01:01:32 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:01:32.526766 | orchestrator | 2026-03-28 01:01:32 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:01:32.527222 | orchestrator | 2026-03-28 01:01:32 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:01:32.527275 | orchestrator | 2026-03-28 01:01:32 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:01:35.562596 | orchestrator | 2026-03-28 01:01:35 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:01:35.564244 | orchestrator | 2026-03-28 01:01:35 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:01:35.565960 | orchestrator | 2026-03-28 01:01:35 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:01:35.566066 | orchestrator | 2026-03-28 01:01:35 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:01:38.611600 | orchestrator | 2026-03-28 01:01:38 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:01:38.611718 | orchestrator | 2026-03-28 01:01:38 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:01:38.612324 | orchestrator | 2026-03-28 01:01:38 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:01:38.612381 | orchestrator | 2026-03-28 01:01:38 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:01:41.643676 | orchestrator | 2026-03-28 01:01:41 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:01:41.643798 | orchestrator | 2026-03-28 01:01:41 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:01:41.644469 | orchestrator | 2026-03-28 01:01:41 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:01:41.644491 | orchestrator | 2026-03-28 01:01:41 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:01:44.676582 | orchestrator | 2026-03-28 01:01:44 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:01:44.677881 | orchestrator | 2026-03-28 01:01:44 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:01:44.679106 | orchestrator | 2026-03-28 01:01:44 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:01:44.679148 | orchestrator | 2026-03-28 01:01:44 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:01:47.723377 | orchestrator | 2026-03-28 01:01:47 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:01:47.723476 | orchestrator | 2026-03-28 01:01:47 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:01:47.724542 | orchestrator | 2026-03-28 01:01:47 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:01:47.724629 | orchestrator | 2026-03-28 01:01:47 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:01:50.774864 | orchestrator | 2026-03-28 01:01:50 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:01:50.778216 | orchestrator | 2026-03-28 01:01:50 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:01:50.780215 | orchestrator | 2026-03-28 01:01:50 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:01:50.780537 | orchestrator | 2026-03-28 01:01:50 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:01:53.831190 | orchestrator | 2026-03-28 01:01:53 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:01:53.833524 | orchestrator | 2026-03-28 01:01:53 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:01:53.836893 | orchestrator | 2026-03-28 01:01:53 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:01:53.836955 | orchestrator | 2026-03-28 01:01:53 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:01:56.873365 | orchestrator | 2026-03-28 01:01:56 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:01:56.873449 | orchestrator | 2026-03-28 01:01:56 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:01:56.874595 | orchestrator | 2026-03-28 01:01:56 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:01:56.874621 | orchestrator | 2026-03-28 01:01:56 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:01:59.915950 | orchestrator | 2026-03-28 01:01:59 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:01:59.917718 | orchestrator | 2026-03-28 01:01:59 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:01:59.920325 | orchestrator | 2026-03-28 01:01:59 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:01:59.920787 | orchestrator | 2026-03-28 01:01:59 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:02.963398 | orchestrator | 2026-03-28 01:02:02 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:02:02.966292 | orchestrator | 2026-03-28 01:02:02 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:02:02.967845 | orchestrator | 2026-03-28 01:02:02 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:02:02.967895 | orchestrator | 2026-03-28 01:02:02 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:06.003970 | orchestrator | 2026-03-28 01:02:06 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:02:06.004586 | orchestrator | 2026-03-28 01:02:06 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:02:06.006804 | orchestrator | 2026-03-28 01:02:06 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:02:06.006846 | orchestrator | 2026-03-28 01:02:06 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:09.076780 | orchestrator | 2026-03-28 01:02:09 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:02:09.076854 | orchestrator | 2026-03-28 01:02:09 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:02:09.076877 | orchestrator | 2026-03-28 01:02:09 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:02:09.076883 | orchestrator | 2026-03-28 01:02:09 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:12.108817 | orchestrator | 2026-03-28 01:02:12 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:02:12.109960 | orchestrator | 2026-03-28 01:02:12 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:02:12.111733 | orchestrator | 2026-03-28 01:02:12 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:02:12.111797 | orchestrator | 2026-03-28 01:02:12 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:15.145645 | orchestrator | 2026-03-28 01:02:15 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:02:15.147917 | orchestrator | 2026-03-28 01:02:15 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:02:15.149713 | orchestrator | 2026-03-28 01:02:15 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:02:15.149752 | orchestrator | 2026-03-28 01:02:15 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:18.196993 | orchestrator | 2026-03-28 01:02:18 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:02:18.198287 | orchestrator | 2026-03-28 01:02:18 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:02:18.201069 | orchestrator | 2026-03-28 01:02:18 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:02:18.201109 | orchestrator | 2026-03-28 01:02:18 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:21.255290 | orchestrator | 2026-03-28 01:02:21 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:02:21.257404 | orchestrator | 2026-03-28 01:02:21 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:02:21.257935 | orchestrator | 2026-03-28 01:02:21 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:02:21.257985 | orchestrator | 2026-03-28 01:02:21 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:24.302156 | orchestrator | 2026-03-28 01:02:24 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:02:24.304090 | orchestrator | 2026-03-28 01:02:24 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state STARTED
2026-03-28 01:02:24.306242 | orchestrator | 2026-03-28 01:02:24 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:02:24.306274 | orchestrator | 2026-03-28 01:02:24 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:27.366177 | orchestrator | 2026-03-28 01:02:27 | INFO  | Task e4ab59d6-cb6f-4e38-aadb-c81f4008ecb1 is in state STARTED
2026-03-28 01:02:27.368084 | orchestrator | 2026-03-28 01:02:27 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:02:27.373130 | orchestrator | 2026-03-28 01:02:27 | INFO  | Task 71ab5e07-8098-49fd-93b3-678ef820dcfe is in state SUCCESS
2026-03-28 01:02:27.374288 | orchestrator |
2026-03-28 01:02:27.374343 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-28 01:02:27.374365 | orchestrator | 2.16.14
2026-03-28 01:02:27.374454 | orchestrator |
2026-03-28 01:02:27.374468 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-03-28 01:02:27.374481 | orchestrator |
2026-03-28 01:02:27.374492 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-28 01:02:27.374504 | orchestrator | Saturday 28 March 2026 01:00:10 +0000 (0:00:00.684) 0:00:00.684 ********
2026-03-28 01:02:27.374516 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 01:02:27.374528 | orchestrator |
2026-03-28 01:02:27.374540 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-28 01:02:27.374551 | orchestrator | Saturday 28 March 2026 01:00:11 +0000 (0:00:00.725) 0:00:01.409 ********
2026-03-28 01:02:27.374563 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:02:27.374575 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:02:27.374632 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:02:27.374644 | orchestrator |
2026-03-28 01:02:27.375124 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-28 01:02:27.375139 | orchestrator | Saturday 28 March 2026 01:00:12 +0000 (0:00:00.626) 0:00:02.036 ********
2026-03-28 01:02:27.375150 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:02:27.375162 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:02:27.375172 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:02:27.375186 | orchestrator |
2026-03-28 01:02:27.375205 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-28 01:02:27.375229 | orchestrator | Saturday 28 March 2026 01:00:12 +0000 (0:00:00.934) 0:00:02.433 ********
2026-03-28 01:02:27.375253 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:02:27.375270 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:02:27.375287 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:02:27.375635 | orchestrator |
2026-03-28 01:02:27.375676 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-28 01:02:27.375689 | orchestrator | Saturday 28 March 2026 01:00:13 +0000 (0:00:00.315) 0:00:03.368 ********
2026-03-28 01:02:27.375701 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:02:27.375712 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:02:27.375723 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:02:27.375734 | orchestrator |
2026-03-28 01:02:27.375745 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-28 01:02:27.375761 | orchestrator | Saturday 28 March 2026 01:00:13 +0000 (0:00:00.397) 0:00:03.683 ********
2026-03-28 01:02:27.375780 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:02:27.375885 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:02:27.375910 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:02:27.375928 | orchestrator |
2026-03-28 01:02:27.375947 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-28 01:02:27.375966 | orchestrator | Saturday 28 March 2026 01:00:14 +0000 (0:00:00.378) 0:00:04.081 ********
2026-03-28 01:02:27.375985 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:02:27.376001 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:02:27.376073 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:02:27.376090 | orchestrator |
2026-03-28 01:02:27.376107 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-28 01:02:27.376126 | orchestrator | Saturday 28 March 2026 01:00:14 +0000 (0:00:00.605) 0:00:04.459 ********
2026-03-28 01:02:27.376145 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:02:27.376165 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:02:27.376183 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:02:27.376194 | orchestrator |
2026-03-28 01:02:27.376205 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-28 01:02:27.376216 | orchestrator | Saturday 28 March 2026 01:00:15 +0000 (0:00:00.310) 0:00:05.064 ********
2026-03-28 01:02:27.376227 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:02:27.376238 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:02:27.376249 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:02:27.376260 | orchestrator |
2026-03-28 01:02:27.376271 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-28 01:02:27.376281 | orchestrator | Saturday 28 March 2026 01:00:15 +0000 (0:00:00.310) 0:00:05.375 ********
2026-03-28 01:02:27.376292 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 01:02:27.376306 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 01:02:27.376325 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 01:02:27.376343 | orchestrator |
2026-03-28 01:02:27.376361 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-28 01:02:27.376379 | orchestrator | Saturday 28 March 2026 01:00:16 +0000 (0:00:00.674) 0:00:06.050 ********
2026-03-28 01:02:27.376395 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:02:27.376413 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:02:27.376430 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:02:27.376448 | orchestrator |
2026-03-28 01:02:27.376466 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-28 01:02:27.376485 | orchestrator | Saturday 28 March 2026 01:00:16 +0000 (0:00:00.495) 0:00:06.545 ********
2026-03-28 01:02:27.376503 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 01:02:27.376521 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 01:02:27.376541 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 01:02:27.376562 | orchestrator |
2026-03-28 01:02:27.376580 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-28 01:02:27.376600 | orchestrator | Saturday 28 March 2026 01:00:18 +0000 (0:00:02.198) 0:00:08.743 ********
2026-03-28 01:02:27.376619 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 01:02:27.376638 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 01:02:27.376659 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-28 01:02:27.376680 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:02:27.376699 | orchestrator |
2026-03-28 01:02:27.376787 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-28 01:02:27.376802 | orchestrator | Saturday 28 March 2026 01:00:19 +0000 (0:00:00.713) 0:00:09.456 ********
2026-03-28 01:02:27.376815 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-28 01:02:27.376830 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-28 01:02:27.376841 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 01:02:27.376866 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:02:27.376878 | orchestrator |
2026-03-28 01:02:27.376889 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-28 01:02:27.376900 | orchestrator | Saturday 28 March 2026 01:00:20 +0000 (0:00:00.918) 0:00:10.374 ********
2026-03-28 01:02:27.376934 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 01:02:27.376958 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 01:02:27.376979 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 01:02:27.377000 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:02:27.377099 | orchestrator |
2026-03-28 01:02:27.377119 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-28 01:02:27.377138 | orchestrator | Saturday 28 March 2026 01:00:20 +0000 (0:00:00.387) 0:00:10.761 ********
2026-03-28 01:02:27.377161 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '45c0ea2460d3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 01:00:17.235537', 'end': '2026-03-28 01:00:17.270743', 'delta': '0:00:00.035206', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['45c0ea2460d3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-28 01:02:27.377185 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '8b9de3ad6fbb', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 01:00:17.993165', 'end': '2026-03-28 01:00:18.038116', 'delta': '0:00:00.044951', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8b9de3ad6fbb'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-28 01:02:27.377251 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'df6a25d5a7c9', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 01:00:18.598550', 'end': '2026-03-28 01:00:18.644940', 'delta': '0:00:00.046390', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['df6a25d5a7c9'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 01:02:27.377277 | orchestrator |
2026-03-28 01:02:27.377288 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-28 01:02:27.377299 | orchestrator | Saturday 28 March 2026 01:00:21 +0000 (0:00:00.225) 0:00:10.987 ********
2026-03-28 01:02:27.377311 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:02:27.377322 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:02:27.377332 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:02:27.377343 | orchestrator |
2026-03-28 01:02:27.377354 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-28 01:02:27.377365 | orchestrator | Saturday 28 March 2026 01:00:21 +0000 (0:00:00.468) 0:00:11.456 ********
2026-03-28 01:02:27.377376 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-03-28 01:02:27.377387 | orchestrator |
2026-03-28 01:02:27.377419 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-28 01:02:27.377430 | orchestrator | Saturday 28 March 2026 01:00:23 +0000 (0:00:01.834) 0:00:13.290 ********
2026-03-28 01:02:27.377441 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:02:27.377452 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:02:27.377463 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:02:27.377474 | orchestrator |
2026-03-28 01:02:27.377485 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-28 01:02:27.377496 | orchestrator | Saturday 28 March 2026 01:00:23 +0000 (0:00:00.331) 0:00:13.622 ********
2026-03-28 01:02:27.377506 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:02:27.377517 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:02:27.377528 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:02:27.377539 | orchestrator |
2026-03-28 01:02:27.377550 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 01:02:27.377561 | orchestrator | Saturday 28 March 2026 01:00:24 +0000 (0:00:00.422) 0:00:14.045 ********
2026-03-28 01:02:27.377572 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:02:27.377583 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:02:27.377594 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:02:27.377605 | orchestrator |
2026-03-28 01:02:27.377615 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-28 01:02:27.377625 | orchestrator | Saturday 28 March 2026 01:00:24 +0000 (0:00:00.528) 0:00:14.573 ********
2026-03-28 01:02:27.377635 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:02:27.377644 | orchestrator |
2026-03-28 01:02:27.377654 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-28 01:02:27.377664 | orchestrator | Saturday 28 March 2026 01:00:24 +0000 (0:00:00.152) 0:00:14.726 ********
2026-03-28 01:02:27.377674 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:02:27.377683 | orchestrator |
2026-03-28 01:02:27.377693 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 01:02:27.377702 | orchestrator | Saturday 28 March 2026 01:00:25 +0000 (0:00:00.268) 0:00:14.994 ********
2026-03-28 01:02:27.377712 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:02:27.377722 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:02:27.377731 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:02:27.377741 | orchestrator |
2026-03-28 01:02:27.377751 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-28 01:02:27.377760 | orchestrator | Saturday 28 March 2026 01:00:25 +0000 (0:00:00.318) 0:00:15.313 ********
2026-03-28 01:02:27.377770 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:02:27.377780 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:02:27.377789 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:02:27.377805 | orchestrator |
2026-03-28 01:02:27.377815 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-28 01:02:27.377825 | orchestrator | Saturday 28 March 2026 01:00:25 +0000 (0:00:00.359) 0:00:15.672 ********
2026-03-28 01:02:27.377834 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:02:27.377844 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:02:27.377854 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:02:27.377863 | orchestrator |
2026-03-28 01:02:27.377873 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-28 01:02:27.377883 | orchestrator | Saturday 28 March 2026 01:00:26 +0000 (0:00:00.565) 0:00:16.238 ********
2026-03-28 01:02:27.377892 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:02:27.377902 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:02:27.377912 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:02:27.377921 | orchestrator |
2026-03-28 01:02:27.377931 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-28 01:02:27.377941 | orchestrator | Saturday 28 March 2026 01:00:26 +0000 (0:00:00.353) 0:00:16.591 ********
2026-03-28 01:02:27.377950 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:02:27.377960 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:02:27.377970 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:02:27.377979 | orchestrator |
2026-03-28 01:02:27.377989 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-28 01:02:27.377999 | orchestrator | Saturday 28 March 2026 01:00:26 +0000 (0:00:00.329) 0:00:16.921 ********
2026-03-28 01:02:27.378057 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:02:27.378071 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:02:27.378081 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:02:27.378122 | orchestrator |
2026-03-28 01:02:27.378134 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-28 01:02:27.378144 | orchestrator | Saturday 28 March 2026 01:00:27 +0000 (0:00:00.343) 0:00:17.265 ********
2026-03-28 01:02:27.378154 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:02:27.378164 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:02:27.378174 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:02:27.378183 | orchestrator |
2026-03-28 01:02:27.378193 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-28 01:02:27.378203 | orchestrator | Saturday 28 March 2026 01:00:27 +0000 (0:00:00.550) 0:00:17.815 ********
2026-03-28 01:02:27.378215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e282229f--a8c2--5daa--9c69--6eb93429113b-osd--block--e282229f--a8c2--5daa--9c69--6eb93429113b', 'dm-uuid-LVM-nG28kqN3mbMtKOhRxNmvwhcmB0RqY3ewIJADuQ1rzsvyry0nnXrQl3TraZcM2dNR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-28 01:02:27.378233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1d415d19--3246--5675--b441--c36cba308c79-osd--block--1d415d19--3246--5675--b441--c36cba308c79', 'dm-uuid-LVM-TAOuoGIQrs87MNf2fFw5tIYBVvLNTD1Jx5dfK7NkPuGSRvrVkpDBjv95LS8LOg4E'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-28 01:02:27.378244 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:02:27.378263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:02:27.378273 | orchestrator | skipping:
[testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22', 'scsi-SQEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part1', 'scsi-SQEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part14', 'scsi-SQEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part15', 
'scsi-SQEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part16', 'scsi-SQEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:02:27.378395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de32c164--f4a0--5092--ad33--650515756f9d-osd--block--de32c164--f4a0--5092--ad33--650515756f9d', 'dm-uuid-LVM-jIb0bnEDbAUwmV2OhoIiGBx2S1hRa36gUlCLm4EMrr716UL3t1D0Y9yUv0cYe1k6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e282229f--a8c2--5daa--9c69--6eb93429113b-osd--block--e282229f--a8c2--5daa--9c69--6eb93429113b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Cb9qxz-e1pg-nAfB-Heaf-oN5a-7YP5-H3nqnD', 'scsi-0QEMU_QEMU_HARDDISK_9560503a-139c-4329-8ffd-1ea1e0c721e5', 'scsi-SQEMU_QEMU_HARDDISK_9560503a-139c-4329-8ffd-1ea1e0c721e5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:02:27.378445 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--65811f0f--7bf7--557a--9618--106707fc2899-osd--block--65811f0f--7bf7--557a--9618--106707fc2899', 'dm-uuid-LVM-cx6I8OBVjWE0SdizXW559kKB1PJIgnzMhy0AGwh0g3hhQGmCpJafxNwqcsh3yuUL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1d415d19--3246--5675--b441--c36cba308c79-osd--block--1d415d19--3246--5675--b441--c36cba308c79'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lqMPFY-0IjU-H0fK-3muW-V5dl-YBJ4-LW0Z8v', 'scsi-0QEMU_QEMU_HARDDISK_64213c7d-5962-413c-aa45-2f60eed78f32', 'scsi-SQEMU_QEMU_HARDDISK_64213c7d-5962-413c-aa45-2f60eed78f32'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:02:27.378487 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_94eace61-73f7-4993-ae2a-02303df71bb3', 'scsi-SQEMU_QEMU_HARDDISK_94eace61-73f7-4993-ae2a-02303df71bb3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:02:27.378509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378520 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:02:27.378559 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
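The "Collect existed devices" task above iterates over each host's `ansible_devices` facts and decides which block devices could serve as OSD data disks: device-mapper volumes, loop devices, the optical drive, the partitioned root disk, and disks already held by a Ceph LVM volume are all passed over. A minimal Python sketch of that kind of filtering (the helper name and exact rules are illustrative, not ceph-ansible's actual code):

```python
def eligible_osd_devices(ansible_devices):
    """Return device names that look like empty data disks.

    Loosely mirrors the checks visible in the log: skip virtual
    loop/dm/optical devices, removable media, disks that already
    carry partitions, and disks claimed by an LVM holder.
    """
    eligible = []
    for name, info in ansible_devices.items():
        if name.startswith(("loop", "dm-", "sr")):
            continue  # loop, device-mapper, or optical devices
        if info.get("removable") == "1":
            continue  # removable media such as the config-drive CD
        if info.get("partitions"):
            continue  # already partitioned (e.g. the sda root disk)
        if info.get("holders"):
            continue  # already claimed, e.g. by a ceph LVM volume
        eligible.append(name)
    return sorted(eligible)

# Trimmed-down facts in the same shape as the log's items above.
facts = {
    "sda": {"removable": "0", "partitions": {"sda1": {}}, "holders": []},
    "sdb": {"removable": "0", "partitions": {}, "holders": ["ceph--osd--block"]},
    "sdd": {"removable": "0", "partitions": {}, "holders": []},
    "sr0": {"removable": "1", "partitions": {}, "holders": []},
    "loop0": {"removable": "0", "partitions": {}, "holders": []},
}
print(eligible_osd_devices(facts))  # → ['sdd']
```

This matches what the log shows for each node: only the unused fourth disk (`sdd`) remains a candidate, while `sdb`/`sdc` are already consumed by Ceph OSD logical volumes.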
 2026-03-28 01:02:27.378570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378581 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7', 'scsi-SQEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part1', 'scsi-SQEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part14', 'scsi-SQEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part15', 'scsi-SQEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part16', 'scsi-SQEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:02:27.378654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--de32c164--f4a0--5092--ad33--650515756f9d-osd--block--de32c164--f4a0--5092--ad33--650515756f9d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tOfppb-TpHr-M3P1-PFHX-OwRx-oSV7-eydvx7', 'scsi-0QEMU_QEMU_HARDDISK_4cb6368c-0066-4efd-8388-81f1557a02ca', 'scsi-SQEMU_QEMU_HARDDISK_4cb6368c-0066-4efd-8388-81f1557a02ca'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:02:27.378670 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--65811f0f--7bf7--557a--9618--106707fc2899-osd--block--65811f0f--7bf7--557a--9618--106707fc2899'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Zr99fa-WjG2-7bae-3cH7-1JXW-6pj7-qnDezw', 'scsi-0QEMU_QEMU_HARDDISK_b9aebbdd-9418-41ff-9099-90b7dcb703f9', 'scsi-SQEMU_QEMU_HARDDISK_b9aebbdd-9418-41ff-9099-90b7dcb703f9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:02:27.378688 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f8ddcfbb-f935-4942-af25-8ac280f1cc67', 'scsi-SQEMU_QEMU_HARDDISK_f8ddcfbb-f935-4942-af25-8ac280f1cc67'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:02:27.378698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:02:27.378708 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:02:27.378718 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:02:27.378729 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8b5a6aab--ec84--598a--adc7--d040a5844549-osd--block--8b5a6aab--ec84--598a--adc7--d040a5844549', 'dm-uuid-LVM-n3x6z0vISm2CJwPGychUi36foVrMCTsVwW5MkFJJ1X5L85t8TBOn3cafSp6hlzA8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378749 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--02fe8db3--ee90--5f59--9f4e--fa58d6febfbe-osd--block--02fe8db3--ee90--5f59--9f4e--fa58d6febfbe', 'dm-uuid-LVM-N6ATf3p9yGvylFwJ3f26f5zsR7t8BGZ4d6cT08TpBrY41fVjdTeLf0cdulABdWlf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378791 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378812 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
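The "Resolve device link(s)" family of tasks earlier in the play canonicalises any `/dev/disk/by-id/...` style symlinks in the configured device lists, so that later comparisons against the collected devices use the real device nodes. A small sketch of that idea, demonstrated on a temporary symlink rather than real `/dev` entries (the helper is an assumed simplification, not ceph-ansible's implementation):

```python
import os
import tempfile

def resolve_device_links(paths):
    """Resolve symlinks such as /dev/disk/by-id/scsi-... to their
    canonical targets (e.g. /dev/sdb), the way the ceph-facts
    resolve steps normalise configured devices."""
    return [os.path.realpath(p) for p in paths]

# Demonstrate with a throwaway symlink instead of touching /dev.
tmp = tempfile.mkdtemp()
target = os.path.join(tmp, "sdb")
open(target, "w").close()
link = os.path.join(tmp, "scsi-0QEMU_QEMU_HARDDISK_demo")  # hypothetical id
os.symlink(target, link)
resolved = resolve_device_links([link])
print(os.path.basename(resolved[0]))  # → sdb
```

Canonicalising first is what lets the play treat `scsi-0QEMU_QEMU_HARDDISK_...` ids from the facts above and plain `/dev/sdX` names in the inventory as the same disk.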
2026-03-28 01:02:27.378843 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:02:27.378895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01', 'scsi-SQEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part1', 'scsi-SQEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part14', 'scsi-SQEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part15', 'scsi-SQEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part16', 'scsi-SQEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:02:27.378926 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8b5a6aab--ec84--598a--adc7--d040a5844549-osd--block--8b5a6aab--ec84--598a--adc7--d040a5844549'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3GzFBH-WypZ-MtIJ-87e6-rfO6-th7u-6qcT8D', 'scsi-0QEMU_QEMU_HARDDISK_d59a946d-61ee-4c80-a151-abde4d1a3094', 'scsi-SQEMU_QEMU_HARDDISK_d59a946d-61ee-4c80-a151-abde4d1a3094'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:02:27.378943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--02fe8db3--ee90--5f59--9f4e--fa58d6febfbe-osd--block--02fe8db3--ee90--5f59--9f4e--fa58d6febfbe'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wLo2a2-D67r-EL0U-1qJK-1pU0-beyk-Ei8JS9', 'scsi-0QEMU_QEMU_HARDDISK_adec6741-41cb-49e2-9389-e6d1302151a0', 'scsi-SQEMU_QEMU_HARDDISK_adec6741-41cb-49e2-9389-e6d1302151a0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:02:27.378958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86e8f6ba-fcdd-41b8-9839-c0061159d97d', 'scsi-SQEMU_QEMU_HARDDISK_86e8f6ba-fcdd-41b8-9839-c0061159d97d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:02:27.378976 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:02:27.378986 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:02:27.378996 | orchestrator | 2026-03-28 01:02:27.379006 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-28 01:02:27.379047 | orchestrator | Saturday 28 March 2026 01:00:28 +0000 (0:00:00.696) 0:00:18.512 ******** 2026-03-28 01:02:27.379058 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e282229f--a8c2--5daa--9c69--6eb93429113b-osd--block--e282229f--a8c2--5daa--9c69--6eb93429113b', 'dm-uuid-LVM-nG28kqN3mbMtKOhRxNmvwhcmB0RqY3ewIJADuQ1rzsvyry0nnXrQl3TraZcM2dNR'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379078 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1d415d19--3246--5675--b441--c36cba308c79-osd--block--1d415d19--3246--5675--b441--c36cba308c79', 'dm-uuid-LVM-TAOuoGIQrs87MNf2fFw5tIYBVvLNTD1Jx5dfK7NkPuGSRvrVkpDBjv95LS8LOg4E'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379089 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379137 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379149 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379168 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379185 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379200 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379210 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379221 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379240 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22', 'scsi-SQEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part1', 'scsi-SQEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part14', 'scsi-SQEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part15', 'scsi-SQEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part16', 'scsi-SQEMU_QEMU_HARDDISK_a4f66dec-4fd4-432f-b746-29e54df03c22-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379267 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de32c164--f4a0--5092--ad33--650515756f9d-osd--block--de32c164--f4a0--5092--ad33--650515756f9d', 'dm-uuid-LVM-jIb0bnEDbAUwmV2OhoIiGBx2S1hRa36gUlCLm4EMrr716UL3t1D0Y9yUv0cYe1k6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379278 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e282229f--a8c2--5daa--9c69--6eb93429113b-osd--block--e282229f--a8c2--5daa--9c69--6eb93429113b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Cb9qxz-e1pg-nAfB-Heaf-oN5a-7YP5-H3nqnD', 'scsi-0QEMU_QEMU_HARDDISK_9560503a-139c-4329-8ffd-1ea1e0c721e5', 'scsi-SQEMU_QEMU_HARDDISK_9560503a-139c-4329-8ffd-1ea1e0c721e5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379289 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--65811f0f--7bf7--557a--9618--106707fc2899-osd--block--65811f0f--7bf7--557a--9618--106707fc2899', 'dm-uuid-LVM-cx6I8OBVjWE0SdizXW559kKB1PJIgnzMhy0AGwh0g3hhQGmCpJafxNwqcsh3yuUL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379305 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1d415d19--3246--5675--b441--c36cba308c79-osd--block--1d415d19--3246--5675--b441--c36cba308c79'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lqMPFY-0IjU-H0fK-3muW-V5dl-YBJ4-LW0Z8v', 'scsi-0QEMU_QEMU_HARDDISK_64213c7d-5962-413c-aa45-2f60eed78f32', 'scsi-SQEMU_QEMU_HARDDISK_64213c7d-5962-413c-aa45-2f60eed78f32'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379322 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379338 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_94eace61-73f7-4993-ae2a-02303df71bb3', 'scsi-SQEMU_QEMU_HARDDISK_94eace61-73f7-4993-ae2a-02303df71bb3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379348 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379359 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379369 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379387 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379404 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379419 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379429 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379439 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:02:27.379449 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379467 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7', 'scsi-SQEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part1', 'scsi-SQEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part14', 'scsi-SQEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part15', 'scsi-SQEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part16', 'scsi-SQEMU_QEMU_HARDDISK_501feac0-2064-4a35-a9ff-661eec37e0e7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379489 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--de32c164--f4a0--5092--ad33--650515756f9d-osd--block--de32c164--f4a0--5092--ad33--650515756f9d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tOfppb-TpHr-M3P1-PFHX-OwRx-oSV7-eydvx7', 'scsi-0QEMU_QEMU_HARDDISK_4cb6368c-0066-4efd-8388-81f1557a02ca', 'scsi-SQEMU_QEMU_HARDDISK_4cb6368c-0066-4efd-8388-81f1557a02ca'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379500 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--65811f0f--7bf7--557a--9618--106707fc2899-osd--block--65811f0f--7bf7--557a--9618--106707fc2899'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Zr99fa-WjG2-7bae-3cH7-1JXW-6pj7-qnDezw', 'scsi-0QEMU_QEMU_HARDDISK_b9aebbdd-9418-41ff-9099-90b7dcb703f9', 'scsi-SQEMU_QEMU_HARDDISK_b9aebbdd-9418-41ff-9099-90b7dcb703f9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379510 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f8ddcfbb-f935-4942-af25-8ac280f1cc67', 'scsi-SQEMU_QEMU_HARDDISK_f8ddcfbb-f935-4942-af25-8ac280f1cc67'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379526 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379542 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:02:27.379552 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8b5a6aab--ec84--598a--adc7--d040a5844549-osd--block--8b5a6aab--ec84--598a--adc7--d040a5844549', 'dm-uuid-LVM-n3x6z0vISm2CJwPGychUi36foVrMCTsVwW5MkFJJ1X5L85t8TBOn3cafSp6hlzA8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379566 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--02fe8db3--ee90--5f59--9f4e--fa58d6febfbe-osd--block--02fe8db3--ee90--5f59--9f4e--fa58d6febfbe', 'dm-uuid-LVM-N6ATf3p9yGvylFwJ3f26f5zsR7t8BGZ4d6cT08TpBrY41fVjdTeLf0cdulABdWlf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379577 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379587 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379597 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379620 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379631 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379646 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379656 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379667 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379684 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01', 'scsi-SQEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part1', 'scsi-SQEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part14', 'scsi-SQEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part15', 'scsi-SQEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part16', 'scsi-SQEMU_QEMU_HARDDISK_220b4e5a-a8a4-4fd5-bea1-dfbb1f989a01-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-28 01:02:27.379706 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8b5a6aab--ec84--598a--adc7--d040a5844549-osd--block--8b5a6aab--ec84--598a--adc7--d040a5844549'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3GzFBH-WypZ-MtIJ-87e6-rfO6-th7u-6qcT8D', 'scsi-0QEMU_QEMU_HARDDISK_d59a946d-61ee-4c80-a151-abde4d1a3094', 'scsi-SQEMU_QEMU_HARDDISK_d59a946d-61ee-4c80-a151-abde4d1a3094'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379717 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--02fe8db3--ee90--5f59--9f4e--fa58d6febfbe-osd--block--02fe8db3--ee90--5f59--9f4e--fa58d6febfbe'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wLo2a2-D67r-EL0U-1qJK-1pU0-beyk-Ei8JS9', 'scsi-0QEMU_QEMU_HARDDISK_adec6741-41cb-49e2-9389-e6d1302151a0', 'scsi-SQEMU_QEMU_HARDDISK_adec6741-41cb-49e2-9389-e6d1302151a0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379727 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86e8f6ba-fcdd-41b8-9839-c0061159d97d', 'scsi-SQEMU_QEMU_HARDDISK_86e8f6ba-fcdd-41b8-9839-c0061159d97d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379747 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:02:27.379758 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:02:27.379768 | orchestrator | 2026-03-28 01:02:27.379778 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-28 01:02:27.379788 | orchestrator | Saturday 28 March 2026 01:00:29 +0000 (0:00:00.699) 0:00:19.212 ******** 2026-03-28 01:02:27.379798 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:02:27.379807 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:02:27.379817 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:02:27.379826 | orchestrator | 2026-03-28 01:02:27.379836 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-28 01:02:27.379846 | orchestrator | Saturday 28 March 2026 01:00:29 +0000 (0:00:00.667) 0:00:19.879 ******** 2026-03-28 01:02:27.379855 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:02:27.379865 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:02:27.379875 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:02:27.379885 | orchestrator | 2026-03-28 01:02:27.379895 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 01:02:27.379905 | orchestrator | Saturday 28 March 2026 01:00:30 +0000 (0:00:00.521) 0:00:20.401 ******** 2026-03-28 01:02:27.379915 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:02:27.379932 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:02:27.379948 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:02:27.379963 | orchestrator | 2026-03-28 01:02:27.379984 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 01:02:27.380036 | orchestrator | Saturday 28 March 2026 01:00:31 +0000 (0:00:00.659) 0:00:21.060 
******** 2026-03-28 01:02:27.380052 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:02:27.380067 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:02:27.380083 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:02:27.380099 | orchestrator | 2026-03-28 01:02:27.380116 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 01:02:27.380132 | orchestrator | Saturday 28 March 2026 01:00:31 +0000 (0:00:00.306) 0:00:21.366 ******** 2026-03-28 01:02:27.380148 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:02:27.380164 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:02:27.380186 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:02:27.380205 | orchestrator | 2026-03-28 01:02:27.380221 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 01:02:27.380236 | orchestrator | Saturday 28 March 2026 01:00:31 +0000 (0:00:00.516) 0:00:21.883 ******** 2026-03-28 01:02:27.380251 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:02:27.380265 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:02:27.380281 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:02:27.380296 | orchestrator | 2026-03-28 01:02:27.380312 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-28 01:02:27.380326 | orchestrator | Saturday 28 March 2026 01:00:32 +0000 (0:00:00.735) 0:00:22.619 ******** 2026-03-28 01:02:27.380363 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-28 01:02:27.380380 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-28 01:02:27.380396 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-28 01:02:27.380412 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-28 01:02:27.380429 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-28 01:02:27.380445 | orchestrator | 
ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-28 01:02:27.380461 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-28 01:02:27.380478 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-28 01:02:27.380495 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-28 01:02:27.380509 | orchestrator | 2026-03-28 01:02:27.380525 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-28 01:02:27.380542 | orchestrator | Saturday 28 March 2026 01:00:33 +0000 (0:00:00.898) 0:00:23.518 ******** 2026-03-28 01:02:27.380552 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-28 01:02:27.380562 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-28 01:02:27.380572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-28 01:02:27.380582 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:02:27.380591 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-28 01:02:27.380601 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-28 01:02:27.380611 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-28 01:02:27.380620 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:02:27.380630 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-28 01:02:27.380639 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-28 01:02:27.380648 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-28 01:02:27.380658 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:02:27.380668 | orchestrator | 2026-03-28 01:02:27.380678 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-28 01:02:27.380688 | orchestrator | Saturday 28 March 2026 01:00:33 +0000 (0:00:00.382) 0:00:23.900 ******** 2026-03-28 
01:02:27.380698 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:02:27.380708 | orchestrator | 2026-03-28 01:02:27.380718 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 01:02:27.380730 | orchestrator | Saturday 28 March 2026 01:00:34 +0000 (0:00:00.765) 0:00:24.666 ******** 2026-03-28 01:02:27.380749 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:02:27.380759 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:02:27.380769 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:02:27.380779 | orchestrator | 2026-03-28 01:02:27.380788 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-28 01:02:27.380798 | orchestrator | Saturday 28 March 2026 01:00:35 +0000 (0:00:00.365) 0:00:25.031 ******** 2026-03-28 01:02:27.380808 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:02:27.380817 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:02:27.380827 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:02:27.380836 | orchestrator | 2026-03-28 01:02:27.380846 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 01:02:27.380856 | orchestrator | Saturday 28 March 2026 01:00:35 +0000 (0:00:00.332) 0:00:25.364 ******** 2026-03-28 01:02:27.380866 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:02:27.380875 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:02:27.380885 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:02:27.380894 | orchestrator | 2026-03-28 01:02:27.380904 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 01:02:27.380922 | orchestrator | Saturday 28 March 2026 01:00:35 +0000 (0:00:00.330) 0:00:25.694 ******** 2026-03-28 
01:02:27.380932 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:02:27.380942 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:02:27.380951 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:02:27.380961 | orchestrator | 2026-03-28 01:02:27.380971 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 01:02:27.380980 | orchestrator | Saturday 28 March 2026 01:00:36 +0000 (0:00:00.672) 0:00:26.367 ******** 2026-03-28 01:02:27.380990 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 01:02:27.381000 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 01:02:27.381043 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 01:02:27.381061 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:02:27.381072 | orchestrator | 2026-03-28 01:02:27.381082 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 01:02:27.381092 | orchestrator | Saturday 28 March 2026 01:00:36 +0000 (0:00:00.414) 0:00:26.781 ******** 2026-03-28 01:02:27.381101 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 01:02:27.381111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 01:02:27.381120 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 01:02:27.381130 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:02:27.381140 | orchestrator | 2026-03-28 01:02:27.381149 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 01:02:27.381159 | orchestrator | Saturday 28 March 2026 01:00:37 +0000 (0:00:00.403) 0:00:27.185 ******** 2026-03-28 01:02:27.381169 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 01:02:27.381178 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 01:02:27.381188 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 01:02:27.381198 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:02:27.381208 | orchestrator | 2026-03-28 01:02:27.381217 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 01:02:27.381228 | orchestrator | Saturday 28 March 2026 01:00:37 +0000 (0:00:00.393) 0:00:27.579 ******** 2026-03-28 01:02:27.381237 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:02:27.381247 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:02:27.381257 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:02:27.381266 | orchestrator | 2026-03-28 01:02:27.381276 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 01:02:27.381286 | orchestrator | Saturday 28 March 2026 01:00:37 +0000 (0:00:00.349) 0:00:27.928 ******** 2026-03-28 01:02:27.381295 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-28 01:02:27.381305 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-28 01:02:27.381315 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-28 01:02:27.381325 | orchestrator | 2026-03-28 01:02:27.381335 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-28 01:02:27.381345 | orchestrator | Saturday 28 March 2026 01:00:38 +0000 (0:00:00.524) 0:00:28.453 ******** 2026-03-28 01:02:27.381354 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 01:02:27.381364 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 01:02:27.381374 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 01:02:27.381384 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-28 01:02:27.381394 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-28 01:02:27.381403 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 01:02:27.381413 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 01:02:27.381423 | orchestrator | 2026-03-28 01:02:27.381433 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-28 01:02:27.381449 | orchestrator | Saturday 28 March 2026 01:00:39 +0000 (0:00:01.102) 0:00:29.556 ******** 2026-03-28 01:02:27.381458 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 01:02:27.381468 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 01:02:27.381479 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 01:02:27.381489 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-28 01:02:27.381498 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 01:02:27.381508 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 01:02:27.381524 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 01:02:27.381535 | orchestrator | 2026-03-28 01:02:27.381545 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-28 01:02:27.381554 | orchestrator | Saturday 28 March 2026 01:00:41 +0000 (0:00:02.148) 0:00:31.705 ******** 2026-03-28 01:02:27.381564 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:02:27.381574 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:02:27.381584 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-28 01:02:27.381594 | orchestrator | 2026-03-28 01:02:27.381603 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-28 01:02:27.381613 | orchestrator | Saturday 28 March 2026 01:00:42 +0000 (0:00:00.398) 0:00:32.103 ******** 2026-03-28 01:02:27.381624 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-28 01:02:27.381636 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-28 01:02:27.381651 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-28 01:02:27.381662 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-28 01:02:27.381672 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-28 01:02:27.381683 | orchestrator | 2026-03-28 01:02:27.381692 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-03-28 01:02:27.381702 | orchestrator | Saturday 28 March 2026 01:01:28 +0000 (0:00:46.427) 0:01:18.531 ******** 2026-03-28 01:02:27.381712 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:02:27.381722 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:02:27.381732 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:02:27.381741 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:02:27.381751 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:02:27.381768 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:02:27.381778 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-28 01:02:27.381788 | orchestrator | 2026-03-28 01:02:27.381797 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-28 01:02:27.381807 | orchestrator | Saturday 28 March 2026 01:01:53 +0000 (0:00:25.018) 0:01:43.549 ******** 2026-03-28 01:02:27.381817 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:02:27.381827 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:02:27.381837 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:02:27.381846 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:02:27.381856 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:02:27.381866 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:02:27.381875 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 01:02:27.381885 | orchestrator | 2026-03-28 01:02:27.381895 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-28 01:02:27.381904 | orchestrator | Saturday 28 March 2026 01:02:06 +0000 (0:00:12.746) 0:01:56.295 ******** 2026-03-28 01:02:27.381914 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:02:27.381924 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 01:02:27.381934 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 01:02:27.381943 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:02:27.381954 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 01:02:27.381969 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 01:02:27.381980 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:02:27.381990 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 01:02:27.382000 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 01:02:27.382079 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:02:27.382104 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 01:02:27.382130 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 01:02:27.382146 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:02:27.382161 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-03-28 01:02:27.382177 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 01:02:27.382193 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:02:27.382208 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 01:02:27.382225 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 01:02:27.382243 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-28 01:02:27.382259 | orchestrator | 2026-03-28 01:02:27.382279 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:02:27.382302 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-28 01:02:27.382315 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-28 01:02:27.382335 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-28 01:02:27.382345 | orchestrator | 2026-03-28 01:02:27.382355 | orchestrator | 2026-03-28 01:02:27.382365 | orchestrator | 2026-03-28 01:02:27.382374 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:02:27.382384 | orchestrator | Saturday 28 March 2026 01:02:24 +0000 (0:00:18.357) 0:02:14.652 ******** 2026-03-28 01:02:27.382393 | orchestrator | =============================================================================== 2026-03-28 01:02:27.382403 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.43s 2026-03-28 01:02:27.382413 | orchestrator | generate keys ---------------------------------------------------------- 25.02s 2026-03-28 01:02:27.382423 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.36s 
2026-03-28 01:02:27.382432 | orchestrator | get keys from monitors ------------------------------------------------- 12.75s 2026-03-28 01:02:27.382442 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.20s 2026-03-28 01:02:27.382452 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.15s 2026-03-28 01:02:27.382461 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.83s 2026-03-28 01:02:27.382471 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.10s 2026-03-28 01:02:27.382481 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.93s 2026-03-28 01:02:27.382490 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.92s 2026-03-28 01:02:27.382500 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.90s 2026-03-28 01:02:27.382510 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.77s 2026-03-28 01:02:27.382520 | orchestrator | ceph-facts : Set osd_pool_default_crush_rule fact ----------------------- 0.74s 2026-03-28 01:02:27.382530 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.73s 2026-03-28 01:02:27.382539 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.71s 2026-03-28 01:02:27.382549 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.70s 2026-03-28 01:02:27.382559 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.70s 2026-03-28 01:02:27.382569 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.67s 2026-03-28 01:02:27.382578 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.67s 2026-03-28 
01:02:27.382588 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.67s
2026-03-28 01:02:27.382598 | orchestrator | 2026-03-28 01:02:27 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:02:27.382608 | orchestrator | 2026-03-28 01:02:27 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:30.447673 | orchestrator | 2026-03-28 01:02:30 | INFO  | Task e4ab59d6-cb6f-4e38-aadb-c81f4008ecb1 is in state STARTED
2026-03-28 01:02:30.452452 | orchestrator | 2026-03-28 01:02:30 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:02:30.454648 | orchestrator | 2026-03-28 01:02:30 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:02:30.454705 | orchestrator | 2026-03-28 01:02:30 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:03:00.996123 | orchestrator | 2026-03-28 01:03:00 | INFO  | Task e4ab59d6-cb6f-4e38-aadb-c81f4008ecb1 is in state STARTED
2026-03-28 01:03:00.999125 | orchestrator | 2026-03-28 01:03:00 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:03:01.001948 | orchestrator | 2026-03-28 01:03:01 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28
01:03:01.002186 | orchestrator | 2026-03-28 01:03:01 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:03:04.058143 | orchestrator | 2026-03-28 01:03:04 | INFO  | Task e4ab59d6-cb6f-4e38-aadb-c81f4008ecb1 is in state STARTED
2026-03-28 01:03:04.066142 | orchestrator | 2026-03-28 01:03:04 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state STARTED
2026-03-28 01:03:04.066542 | orchestrator | 2026-03-28 01:03:04 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED
2026-03-28 01:03:04.066786 | orchestrator | 2026-03-28 01:03:04 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:03:07.112335 | orchestrator | 2026-03-28 01:03:07 | INFO  | Task e4ab59d6-cb6f-4e38-aadb-c81f4008ecb1 is in state SUCCESS
2026-03-28 01:03:07.113427 | orchestrator | 2026-03-28 01:03:07 | INFO  | Task ce8a0f50-3682-4c26-ba6a-8e0a9330afad is in state SUCCESS
2026-03-28 01:03:07.114960 | orchestrator |
2026-03-28 01:03:07.115025 | orchestrator |
2026-03-28 01:03:07.115034 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-03-28 01:03:07.115042 | orchestrator |
2026-03-28 01:03:07.115048 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-03-28 01:03:07.115055 | orchestrator | Saturday 28 March 2026 01:02:29 +0000 (0:00:00.184) 0:00:00.184 ********
2026-03-28 01:03:07.115063 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-28 01:03:07.115071 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-28 01:03:07.115077 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-28 01:03:07.115083 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-28 01:03:07.115090 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-28 01:03:07.115096 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-28 01:03:07.115102 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-28 01:03:07.115109 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-28 01:03:07.115186 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-28 01:03:07.115196 | orchestrator |
2026-03-28 01:03:07.115204 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-03-28 01:03:07.115211 | orchestrator | Saturday 28 March 2026 01:02:34 +0000 (0:00:04.784) 0:00:04.968 ********
2026-03-28 01:03:07.115218 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-28 01:03:07.115451 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-28 01:03:07.115463 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-28 01:03:07.115470 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-28 01:03:07.115477 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-28 01:03:07.115485 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-28 01:03:07.115492 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-28 01:03:07.115499 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-28 01:03:07.115507 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-28 01:03:07.115514 | orchestrator |
2026-03-28 01:03:07.115521 | orchestrator | TASK [Create share directory] **************************************************
2026-03-28 01:03:07.115528 | orchestrator | Saturday 28 March 2026 01:02:38 +0000 (0:00:04.489) 0:00:09.458 ********
2026-03-28 01:03:07.115535 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-28 01:03:07.115542 | orchestrator |
2026-03-28 01:03:07.115549 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-03-28 01:03:07.115556 | orchestrator | Saturday 28 March 2026 01:02:39 +0000 (0:00:01.036) 0:00:10.495 ********
2026-03-28 01:03:07.115563 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-03-28 01:03:07.115570 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-28 01:03:07.115577 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-28 01:03:07.115583 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-03-28 01:03:07.115590 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-28 01:03:07.115597 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-03-28 01:03:07.115604 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-03-28 01:03:07.115611 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-03-28 01:03:07.115618 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-03-28 01:03:07.115625 | orchestrator |
2026-03-28 01:03:07.115632 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-03-28 01:03:07.115638 | orchestrator | Saturday 28 March 2026 01:02:54 +0000 (0:00:14.592) 0:00:25.087 ********
2026-03-28 01:03:07.115645 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-03-28 01:03:07.115652 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-03-28 01:03:07.115659 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-28 01:03:07.115665 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-28 01:03:07.115695 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-28 01:03:07.115703 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-28 01:03:07.115709 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-03-28 01:03:07.115715 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-03-28 01:03:07.115721 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-03-28 01:03:07.115735 | orchestrator |
2026-03-28 01:03:07.115741 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-03-28 01:03:07.115746 | orchestrator | Saturday 28 March 2026 01:02:57 +0000 (0:00:03.355) 0:00:28.443 ********
2026-03-28 01:03:07.115753 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-03-28 01:03:07.115759 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-28 01:03:07.115765 |
orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-28 01:03:07.115771 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-03-28 01:03:07.115777 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-28 01:03:07.115782 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-03-28 01:03:07.115790 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-03-28 01:03:07.115796 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-03-28 01:03:07.115802 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-03-28 01:03:07.115809 | orchestrator |
2026-03-28 01:03:07.115816 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:03:07.115822 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:03:07.115830 | orchestrator |
2026-03-28 01:03:07.115836 | orchestrator |
2026-03-28 01:03:07.115843 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:03:07.115850 | orchestrator | Saturday 28 March 2026 01:03:05 +0000 (0:00:07.369) 0:00:35.813 ********
2026-03-28 01:03:07.115857 | orchestrator | ===============================================================================
2026-03-28 01:03:07.115863 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.59s
2026-03-28 01:03:07.115870 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.37s
2026-03-28 01:03:07.115877 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.78s
2026-03-28 01:03:07.115883 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.49s
2026-03-28 01:03:07.115890 | orchestrator | Check if target directories exist --------------------------------------- 3.36s
2026-03-28 01:03:07.115897 | orchestrator | Create share directory -------------------------------------------------- 1.04s
2026-03-28 01:03:07.115903 | orchestrator |
2026-03-28 01:03:07.116109 | orchestrator |
2026-03-28 01:03:07.116118 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 01:03:07.116125 | orchestrator |
2026-03-28 01:03:07.116131 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 01:03:07.116138 | orchestrator | Saturday 28 March 2026 01:01:13 +0000 (0:00:00.268) 0:00:00.268 ********
2026-03-28 01:03:07.116145 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:03:07.116153 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:03:07.116159 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:03:07.116166 | orchestrator |
2026-03-28 01:03:07.116172 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 01:03:07.116179 | orchestrator | Saturday 28 March 2026 01:01:13 +0000 (0:00:00.336) 0:00:00.605 ********
2026-03-28 01:03:07.116186 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-03-28 01:03:07.116193 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-03-28 01:03:07.116199 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-03-28 01:03:07.116205 | orchestrator |
2026-03-28 01:03:07.116212 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-03-28 01:03:07.116218 | orchestrator |
2026-03-28 01:03:07.116225 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-28 01:03:07.116232 | orchestrator | Saturday 28 March 2026 01:01:14 +0000 (0:00:00.491) 0:00:01.227 ********
2026-03-28 01:03:07.116244 | orchestrator
| included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:03:07.116249 | orchestrator | 2026-03-28 01:03:07.116253 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-03-28 01:03:07.116257 | orchestrator | Saturday 28 March 2026 01:01:14 +0000 (0:00:00.491) 0:00:01.719 ******** 2026-03-28 01:03:07.116283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 01:03:07.116291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': 
'80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 01:03:07.116308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 01:03:07.116314 | orchestrator | 2026-03-28 01:03:07.116318 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-28 01:03:07.116322 | orchestrator | Saturday 28 March 2026 01:01:16 +0000 (0:00:01.380) 0:00:03.100 ******** 2026-03-28 01:03:07.116326 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:03:07.116330 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:03:07.116334 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:03:07.116338 | orchestrator | 2026-03-28 01:03:07.116342 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-28 01:03:07.116346 | orchestrator | Saturday 28 March 2026 01:01:16 +0000 (0:00:00.495) 0:00:03.596 ******** 2026-03-28 01:03:07.116351 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-28 01:03:07.116355 | orchestrator | skipping: 
[testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-28 01:03:07.116359 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-28 01:03:07.116363 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-03-28 01:03:07.116371 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-28 01:03:07.116375 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-28 01:03:07.116379 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-28 01:03:07.116383 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-28 01:03:07.116387 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-28 01:03:07.116391 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-28 01:03:07.116395 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-28 01:03:07.116399 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-28 01:03:07.116402 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-28 01:03:07.116406 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-28 01:03:07.116410 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-28 01:03:07.116414 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-28 01:03:07.116418 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-28 01:03:07.116422 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-28 
01:03:07.116426 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-28 01:03:07.116430 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-28 01:03:07.116434 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-28 01:03:07.116447 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-28 01:03:07.116454 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-28 01:03:07.116459 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-28 01:03:07.116466 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-28 01:03:07.116474 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-28 01:03:07.116481 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-28 01:03:07.116487 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-28 01:03:07.116494 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-28 01:03:07.116501 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-28 01:03:07.116507 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-28 01:03:07.116514 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-28 01:03:07.116520 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-28 01:03:07.116528 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-28 01:03:07.116532 | orchestrator | 2026-03-28 01:03:07.116536 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:03:07.116540 | orchestrator | Saturday 28 March 2026 01:01:17 +0000 (0:00:00.811) 0:00:04.407 ******** 2026-03-28 01:03:07.116544 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:03:07.116548 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:03:07.116553 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:03:07.116557 | orchestrator | 2026-03-28 01:03:07.116561 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:03:07.116565 | orchestrator | Saturday 28 March 2026 01:01:17 +0000 (0:00:00.322) 0:00:04.730 ******** 2026-03-28 01:03:07.116569 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.116573 | orchestrator | 2026-03-28 01:03:07.116577 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:03:07.116581 | orchestrator | Saturday 28 March 2026 01:01:17 +0000 (0:00:00.138) 0:00:04.868 ******** 2026-03-28 01:03:07.116585 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.116589 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:07.116593 | orchestrator | 
skipping: [testbed-node-2] 2026-03-28 01:03:07.116597 | orchestrator | 2026-03-28 01:03:07.116601 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:03:07.116605 | orchestrator | Saturday 28 March 2026 01:01:18 +0000 (0:00:00.505) 0:00:05.374 ******** 2026-03-28 01:03:07.116609 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:03:07.116613 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:03:07.116617 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:03:07.116621 | orchestrator | 2026-03-28 01:03:07.116625 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:03:07.116629 | orchestrator | Saturday 28 March 2026 01:01:18 +0000 (0:00:00.333) 0:00:05.708 ******** 2026-03-28 01:03:07.116633 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.116637 | orchestrator | 2026-03-28 01:03:07.116641 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:03:07.116645 | orchestrator | Saturday 28 March 2026 01:01:18 +0000 (0:00:00.166) 0:00:05.874 ******** 2026-03-28 01:03:07.116649 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.116653 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:07.116657 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:07.116660 | orchestrator | 2026-03-28 01:03:07.116664 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:03:07.116668 | orchestrator | Saturday 28 March 2026 01:01:19 +0000 (0:00:00.358) 0:00:06.233 ******** 2026-03-28 01:03:07.116673 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:03:07.116677 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:03:07.116681 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:03:07.116685 | orchestrator | 2026-03-28 01:03:07.116689 | orchestrator | TASK [horizon : Check if policies shall be overwritten] 
************************ 2026-03-28 01:03:07.116693 | orchestrator | Saturday 28 March 2026 01:01:19 +0000 (0:00:00.340) 0:00:06.573 ******** 2026-03-28 01:03:07.116697 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.116701 | orchestrator | 2026-03-28 01:03:07.116705 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:03:07.116709 | orchestrator | Saturday 28 March 2026 01:01:20 +0000 (0:00:00.363) 0:00:06.937 ******** 2026-03-28 01:03:07.116713 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.116717 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:07.116721 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:07.116725 | orchestrator | 2026-03-28 01:03:07.116735 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:03:07.116739 | orchestrator | Saturday 28 March 2026 01:01:20 +0000 (0:00:00.332) 0:00:07.269 ******** 2026-03-28 01:03:07.116743 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:03:07.116751 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:03:07.116755 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:03:07.116759 | orchestrator | 2026-03-28 01:03:07.116763 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:03:07.116767 | orchestrator | Saturday 28 March 2026 01:01:20 +0000 (0:00:00.356) 0:00:07.626 ******** 2026-03-28 01:03:07.116771 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.116775 | orchestrator | 2026-03-28 01:03:07.116779 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:03:07.116783 | orchestrator | Saturday 28 March 2026 01:01:20 +0000 (0:00:00.127) 0:00:07.753 ******** 2026-03-28 01:03:07.116787 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.116791 | orchestrator | skipping: [testbed-node-1] 2026-03-28 
01:03:07.116795 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:07.116799 | orchestrator | 2026-03-28 01:03:07.116803 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:03:07.116807 | orchestrator | Saturday 28 March 2026 01:01:21 +0000 (0:00:00.307) 0:00:08.061 ******** 2026-03-28 01:03:07.116811 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:03:07.116815 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:03:07.116819 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:03:07.116823 | orchestrator | 2026-03-28 01:03:07.116827 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:03:07.116831 | orchestrator | Saturday 28 March 2026 01:01:21 +0000 (0:00:00.531) 0:00:08.592 ******** 2026-03-28 01:03:07.116835 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.116839 | orchestrator | 2026-03-28 01:03:07.116843 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:03:07.116847 | orchestrator | Saturday 28 March 2026 01:01:21 +0000 (0:00:00.138) 0:00:08.730 ******** 2026-03-28 01:03:07.116851 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.116855 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:07.116859 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:07.116863 | orchestrator | 2026-03-28 01:03:07.116867 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:03:07.116871 | orchestrator | Saturday 28 March 2026 01:01:22 +0000 (0:00:00.320) 0:00:09.051 ******** 2026-03-28 01:03:07.116875 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:03:07.116879 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:03:07.116884 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:03:07.116888 | orchestrator | 2026-03-28 01:03:07.116892 | orchestrator | TASK [horizon : Check if policies 
shall be overwritten] ************************ 2026-03-28 01:03:07.116896 | orchestrator | Saturday 28 March 2026 01:01:22 +0000 (0:00:00.369) 0:00:09.421 ******** 2026-03-28 01:03:07.116900 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.116904 | orchestrator | 2026-03-28 01:03:07.116908 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:03:07.116912 | orchestrator | Saturday 28 March 2026 01:01:22 +0000 (0:00:00.154) 0:00:09.576 ******** 2026-03-28 01:03:07.116916 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.116920 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:07.116924 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:07.116928 | orchestrator | 2026-03-28 01:03:07.116932 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:03:07.116936 | orchestrator | Saturday 28 March 2026 01:01:23 +0000 (0:00:00.314) 0:00:09.890 ******** 2026-03-28 01:03:07.116940 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:03:07.116944 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:03:07.116948 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:03:07.116953 | orchestrator | 2026-03-28 01:03:07.116957 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:03:07.116961 | orchestrator | Saturday 28 March 2026 01:01:23 +0000 (0:00:00.623) 0:00:10.514 ******** 2026-03-28 01:03:07.116981 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.116986 | orchestrator | 2026-03-28 01:03:07.116990 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:03:07.117002 | orchestrator | Saturday 28 March 2026 01:01:23 +0000 (0:00:00.154) 0:00:10.668 ******** 2026-03-28 01:03:07.117006 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.117010 | orchestrator | skipping: [testbed-node-1] 
2026-03-28 01:03:07.117014 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:07.117018 | orchestrator | 2026-03-28 01:03:07.117022 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:03:07.117026 | orchestrator | Saturday 28 March 2026 01:01:24 +0000 (0:00:00.332) 0:00:11.001 ******** 2026-03-28 01:03:07.117030 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:03:07.117034 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:03:07.117038 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:03:07.117042 | orchestrator | 2026-03-28 01:03:07.117046 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:03:07.117050 | orchestrator | Saturday 28 March 2026 01:01:24 +0000 (0:00:00.322) 0:00:11.324 ******** 2026-03-28 01:03:07.117054 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.117058 | orchestrator | 2026-03-28 01:03:07.117062 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:03:07.117066 | orchestrator | Saturday 28 March 2026 01:01:24 +0000 (0:00:00.145) 0:00:11.469 ******** 2026-03-28 01:03:07.117070 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.117074 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:07.117078 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:07.117082 | orchestrator | 2026-03-28 01:03:07.117086 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:03:07.117090 | orchestrator | Saturday 28 March 2026 01:01:25 +0000 (0:00:00.532) 0:00:12.001 ******** 2026-03-28 01:03:07.117094 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:03:07.117098 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:03:07.117102 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:03:07.117106 | orchestrator | 2026-03-28 01:03:07.117111 | orchestrator | TASK [horizon : Check 
if policies shall be overwritten] ************************ 2026-03-28 01:03:07.117115 | orchestrator | Saturday 28 March 2026 01:01:25 +0000 (0:00:00.398) 0:00:12.400 ******** 2026-03-28 01:03:07.117124 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.117128 | orchestrator | 2026-03-28 01:03:07.117133 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:03:07.117137 | orchestrator | Saturday 28 March 2026 01:01:25 +0000 (0:00:00.176) 0:00:12.577 ******** 2026-03-28 01:03:07.117141 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.117144 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:07.117148 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:07.117152 | orchestrator | 2026-03-28 01:03:07.117156 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:03:07.117160 | orchestrator | Saturday 28 March 2026 01:01:26 +0000 (0:00:00.376) 0:00:12.954 ******** 2026-03-28 01:03:07.117164 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:03:07.117168 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:03:07.117173 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:03:07.117177 | orchestrator | 2026-03-28 01:03:07.117180 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:03:07.117184 | orchestrator | Saturday 28 March 2026 01:01:26 +0000 (0:00:00.355) 0:00:13.309 ******** 2026-03-28 01:03:07.117188 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.117192 | orchestrator | 2026-03-28 01:03:07.117196 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:03:07.117200 | orchestrator | Saturday 28 March 2026 01:01:26 +0000 (0:00:00.141) 0:00:13.451 ******** 2026-03-28 01:03:07.117204 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.117208 | orchestrator | skipping: 
[testbed-node-1] 2026-03-28 01:03:07.117212 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:07.117216 | orchestrator | 2026-03-28 01:03:07.117220 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-28 01:03:07.117229 | orchestrator | Saturday 28 March 2026 01:01:27 +0000 (0:00:00.527) 0:00:13.978 ******** 2026-03-28 01:03:07.117233 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:03:07.117237 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:03:07.117241 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:03:07.117245 | orchestrator | 2026-03-28 01:03:07.117249 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-28 01:03:07.117254 | orchestrator | Saturday 28 March 2026 01:01:28 +0000 (0:00:01.765) 0:00:15.743 ******** 2026-03-28 01:03:07.117258 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-28 01:03:07.117262 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-28 01:03:07.117266 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-28 01:03:07.117270 | orchestrator | 2026-03-28 01:03:07.117274 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-28 01:03:07.117278 | orchestrator | Saturday 28 March 2026 01:01:31 +0000 (0:00:02.301) 0:00:18.044 ******** 2026-03-28 01:03:07.117282 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-28 01:03:07.117286 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-28 01:03:07.117290 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-28 01:03:07.117294 | 
orchestrator | 2026-03-28 01:03:07.117298 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-28 01:03:07.117302 | orchestrator | Saturday 28 March 2026 01:01:33 +0000 (0:00:02.581) 0:00:20.626 ******** 2026-03-28 01:03:07.117306 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-28 01:03:07.117310 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-28 01:03:07.117314 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-28 01:03:07.117318 | orchestrator | 2026-03-28 01:03:07.117322 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-28 01:03:07.117326 | orchestrator | Saturday 28 March 2026 01:01:36 +0000 (0:00:02.279) 0:00:22.905 ******** 2026-03-28 01:03:07.117330 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.117334 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:07.117338 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:07.117342 | orchestrator | 2026-03-28 01:03:07.117346 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-28 01:03:07.117350 | orchestrator | Saturday 28 March 2026 01:01:36 +0000 (0:00:00.398) 0:00:23.304 ******** 2026-03-28 01:03:07.117354 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.117358 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:07.117362 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:07.117366 | orchestrator | 2026-03-28 01:03:07.117370 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-28 01:03:07.117374 | orchestrator | Saturday 28 March 2026 01:01:36 +0000 (0:00:00.318) 0:00:23.622 ******** 2026-03-28 01:03:07.117378 | 
orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:03:07.117382 | orchestrator | 2026-03-28 01:03:07.117386 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-28 01:03:07.117391 | orchestrator | Saturday 28 March 2026 01:01:37 +0000 (0:00:00.815) 0:00:24.438 ******** 2026-03-28 01:03:07.117402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 01:03:07.117414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 01:03:07.117429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 01:03:07.117434 | orchestrator | 2026-03-28 01:03:07.117438 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-28 01:03:07.117442 | orchestrator | Saturday 28 March 2026 01:01:39 +0000 (0:00:01.758) 0:00:26.196 ******** 2026-03-28 01:03:07.117453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 
'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 01:03:07.117464 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.117468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 01:03:07.117473 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:07.117484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 01:03:07.117492 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:07.117496 | orchestrator | 2026-03-28 01:03:07.117500 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-28 01:03:07.117504 | orchestrator | Saturday 28 March 2026 01:01:40 +0000 (0:00:00.698) 0:00:26.894 ******** 2026-03-28 01:03:07.117509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 01:03:07.117513 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.117523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 01:03:07.117532 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:07.117537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 01:03:07.117542 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:07.117549 | orchestrator | 2026-03-28 01:03:07.117553 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-28 01:03:07.117557 | orchestrator | Saturday 28 March 2026 01:01:40 +0000 (0:00:00.953) 0:00:27.847 ******** 2026-03-28 01:03:07.117567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 01:03:07.117572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 
'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 01:03:07.117599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 01:03:07.117604 | orchestrator | 2026-03-28 01:03:07.117608 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-28 01:03:07.117612 | orchestrator | Saturday 28 March 2026 01:01:42 +0000 (0:00:01.741) 0:00:29.589 ******** 2026-03-28 01:03:07.117616 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:07.117620 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:07.117624 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:07.117628 | orchestrator | 2026-03-28 01:03:07.117632 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-28 01:03:07.117636 | orchestrator | Saturday 28 March 2026 01:01:43 +0000 (0:00:00.377) 0:00:29.967 ******** 2026-03-28 01:03:07.117640 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:03:07.117644 | orchestrator | 2026-03-28 01:03:07.117648 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-28 01:03:07.117652 | orchestrator | Saturday 28 March 2026 01:01:43 +0000 (0:00:00.574) 0:00:30.541 ******** 2026-03-28 01:03:07.117656 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:03:07.117660 | orchestrator | 2026-03-28 01:03:07.117664 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-28 01:03:07.117668 | orchestrator | Saturday 28 March 2026 01:01:46 +0000 (0:00:02.697) 0:00:33.238 ******** 2026-03-28 01:03:07.117677 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:03:07.117681 | orchestrator | 2026-03-28 
01:03:07.117685 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-28 01:03:07.117689 | orchestrator | Saturday 28 March 2026 01:01:49 +0000 (0:00:02.949) 0:00:36.188 ******** 2026-03-28 01:03:07.117693 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:03:07.117697 | orchestrator | 2026-03-28 01:03:07.117701 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-28 01:03:07.117705 | orchestrator | Saturday 28 March 2026 01:02:06 +0000 (0:00:17.554) 0:00:53.742 ******** 2026-03-28 01:03:07.117709 | orchestrator | 2026-03-28 01:03:07.117713 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-28 01:03:07.117717 | orchestrator | Saturday 28 March 2026 01:02:06 +0000 (0:00:00.069) 0:00:53.812 ******** 2026-03-28 01:03:07.117721 | orchestrator | 2026-03-28 01:03:07.117725 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-28 01:03:07.117728 | orchestrator | Saturday 28 March 2026 01:02:07 +0000 (0:00:00.072) 0:00:53.885 ******** 2026-03-28 01:03:07.117732 | orchestrator | 2026-03-28 01:03:07.117736 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-28 01:03:07.117740 | orchestrator | Saturday 28 March 2026 01:02:07 +0000 (0:00:00.073) 0:00:53.958 ******** 2026-03-28 01:03:07.117744 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:03:07.117748 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:03:07.117752 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:03:07.117756 | orchestrator | 2026-03-28 01:03:07.117760 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:03:07.117764 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-28 01:03:07.117773 | 
orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-28 01:03:07.117778 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-28 01:03:07.117782 | orchestrator | 2026-03-28 01:03:07.117786 | orchestrator | 2026-03-28 01:03:07.117790 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:03:07.117794 | orchestrator | Saturday 28 March 2026 01:03:05 +0000 (0:00:58.561) 0:01:52.520 ******** 2026-03-28 01:03:07.117798 | orchestrator | =============================================================================== 2026-03-28 01:03:07.117802 | orchestrator | horizon : Restart horizon container ------------------------------------ 58.56s 2026-03-28 01:03:07.117806 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.55s 2026-03-28 01:03:07.117810 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.95s 2026-03-28 01:03:07.117814 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.70s 2026-03-28 01:03:07.117818 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.58s 2026-03-28 01:03:07.117822 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.30s 2026-03-28 01:03:07.117826 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.28s 2026-03-28 01:03:07.117830 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.77s 2026-03-28 01:03:07.117834 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.76s 2026-03-28 01:03:07.117838 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.74s 2026-03-28 01:03:07.117842 | orchestrator | horizon : Ensuring 
config directories exist ----------------------------- 1.38s 2026-03-28 01:03:07.117846 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.95s 2026-03-28 01:03:07.117850 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.82s 2026-03-28 01:03:07.117858 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.81s 2026-03-28 01:03:07.117862 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.70s 2026-03-28 01:03:07.117866 | orchestrator | horizon : Update policy file name --------------------------------------- 0.62s 2026-03-28 01:03:07.117870 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s 2026-03-28 01:03:07.117874 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.57s 2026-03-28 01:03:07.117878 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.53s 2026-03-28 01:03:07.117882 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s 2026-03-28 01:03:07.117886 | orchestrator | 2026-03-28 01:03:07 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state STARTED 2026-03-28 01:03:07.117890 | orchestrator | 2026-03-28 01:03:07 | INFO  | Task 101dafb3-32fe-4bc4-b7c7-a384a7b3f218 is in state STARTED 2026-03-28 01:03:07.117894 | orchestrator | 2026-03-28 01:03:07 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:04:02.002443 | orchestrator | 2026-03-28 01:04:02 | INFO  | Task fc64854e-0cd6-4890-833b-da25dc0e6085 is in state STARTED 2026-03-28 01:04:02.004690 | orchestrator | 2026-03-28 01:04:02 | INFO  | Task c08d10ab-a42c-4dd0-900c-6c35ed4f279c is in state STARTED 2026-03-28 01:04:02.005600 | orchestrator | 2026-03-28 01:04:02 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:04:02.006538 | orchestrator | 2026-03-28 01:04:02 | INFO  | Task a245b4c1-8e6a-459c-ac33-d24df19a9e0a is in state STARTED 2026-03-28 01:04:02.008850 | orchestrator | 2026-03-28 01:04:02 | INFO  | Task 4a689a7d-8c72-46f9-aaf3-956fb5ec7869 is in state SUCCESS 2026-03-28 01:04:02.011326 | orchestrator | 2026-03-28 01:04:02.011399 | orchestrator | 2026-03-28 01:04:02.011415 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:04:02.011428 | orchestrator | 2026-03-28 01:04:02.011439 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:04:02.011451 | orchestrator | Saturday 28 March 2026 01:01:13 +0000 (0:00:00.278) 0:00:00.278 ******** 2026-03-28 01:04:02.011463 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:04:02.011476 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:04:02.011487 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:04:02.011498 | orchestrator | 2026-03-28 01:04:02.011509 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:04:02.011520 | orchestrator | Saturday 28 March 2026 01:01:13 +0000 (0:00:00.313) 0:00:00.592 ******** 2026-03-28 01:04:02.011531 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-28 01:04:02.011543 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-28 01:04:02.011553 |
orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-28 01:04:02.011564 | orchestrator | 2026-03-28 01:04:02.011575 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-28 01:04:02.011586 | orchestrator | 2026-03-28 01:04:02.011597 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-28 01:04:02.011608 | orchestrator | Saturday 28 March 2026 01:01:14 +0000 (0:00:00.536) 0:00:01.129 ******** 2026-03-28 01:04:02.011619 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:04:02.011631 | orchestrator | 2026-03-28 01:04:02.011641 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-28 01:04:02.011652 | orchestrator | Saturday 28 March 2026 01:01:14 +0000 (0:00:00.656) 0:00:01.785 ******** 2026-03-28 01:04:02.011774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}}) 2026-03-28 01:04:02.011794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:04:02.011868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:04:02.011884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:04:02.011897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:04:02.011910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:04:02.012586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:04:02.012642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:04:02.012665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:04:02.012677 | orchestrator | 2026-03-28 01:04:02.012689 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-03-28 01:04:02.012714 | orchestrator | Saturday 28 March 2026 01:01:16 +0000 (0:00:02.045) 0:00:03.831 ******** 2026-03-28 01:04:02.012726 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:04:02.012738 | orchestrator | 2026-03-28 01:04:02.012749 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-28 01:04:02.012760 | orchestrator | Saturday 28 March 2026 01:01:17 +0000 (0:00:00.147) 0:00:03.978 ******** 2026-03-28 01:04:02.012771 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:04:02.012782 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:04:02.012793 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:04:02.012803 | orchestrator | 2026-03-28 01:04:02.012814 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-28 01:04:02.012825 | orchestrator | Saturday 28 March 2026 01:01:17 +0000 (0:00:00.463) 0:00:04.442 ******** 2026-03-28 01:04:02.012836 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 01:04:02.012847 | orchestrator | 2026-03-28 01:04:02.012858 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-28 01:04:02.012869 | orchestrator | Saturday 28 March 2026 01:01:18 +0000 (0:00:00.893) 0:00:05.335 ******** 2026-03-28 01:04:02.012880 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:04:02.012891 | orchestrator | 2026-03-28 01:04:02.012901 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-28 01:04:02.012915 | orchestrator | Saturday 28 
March 2026 01:01:19 +0000 (0:00:00.583) 0:00:05.919 ******** 2026-03-28 01:04:02.012970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:04:02.013003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:04:02.013046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:04:02.013069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:04:02.013089 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:04:02.013109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:04:02.013138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:04:02.013160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 
'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:04:02.013187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:04:02.013207 | orchestrator | 2026-03-28 01:04:02.013226 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-28 01:04:02.013245 | orchestrator | Saturday 28 March 2026 01:01:22 +0000 (0:00:03.870) 0:00:09.790 ******** 2026-03-28 01:04:02.013281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 01:04:02.013302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:04:02.013334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:04:02.013354 | orchestrator | skipping: [testbed-node-0] 2026-03-28 
01:04:02.013374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 01:04:02.013403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:04:02.013436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:04:02.013457 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:04:02.013477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 01:04:02.013509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:04:02.013527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:04:02.013546 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:04:02.013564 | orchestrator | 2026-03-28 01:04:02.013581 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-28 01:04:02.013598 | orchestrator | Saturday 28 March 2026 01:01:23 +0000 (0:00:00.585) 0:00:10.375 ******** 2026-03-28 01:04:02.013624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 01:04:02.013655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:04:02.013675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:04:02.013713 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:04:02.013733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 01:04:02.013751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:04:02.013770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:04:02.013788 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:04:02.013825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 01:04:02.013845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:04:02.013875 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:04:02.013894 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:04:02.013913 | orchestrator | 2026-03-28 01:04:02.013984 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-28 01:04:02.014004 | orchestrator | Saturday 28 March 2026 01:01:24 +0000 (0:00:00.977) 0:00:11.352 ******** 2026-03-28 01:04:02.014106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}}) 2026-03-28 01:04:02.014128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:04:02.014154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:04:02.014179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:04:02.014190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:04:02.014202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:04:02.014213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:04:02.014230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:04:02.014250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:04:02.014270 | orchestrator | 2026-03-28 01:04:02.014281 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-28 01:04:02.014292 | orchestrator | Saturday 28 March 2026 01:01:28 +0000 (0:00:03.806) 0:00:15.159 ******** 2026-03-28 01:04:02.014304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:04:02.014316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:04:02.014332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:04:02.014357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:04:02.014400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:04:02.014419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:04:02.014431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:04:02.014442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:04:02.014454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:04:02.014465 | orchestrator | 2026-03-28 01:04:02.014476 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-28 01:04:02.014493 | orchestrator | Saturday 28 March 2026 01:01:34 +0000 (0:00:05.859) 0:00:21.018 ******** 2026-03-28 01:04:02.014505 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:04:02.014516 | orchestrator | changed: 
[testbed-node-0] 2026-03-28 01:04:02.014535 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:04:02.014546 | orchestrator | 2026-03-28 01:04:02.014558 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-28 01:04:02.014572 | orchestrator | Saturday 28 March 2026 01:01:35 +0000 (0:00:01.709) 0:00:22.727 ******** 2026-03-28 01:04:02.014591 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:04:02.014607 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:04:02.014622 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:04:02.014638 | orchestrator | 2026-03-28 01:04:02.014652 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-28 01:04:02.014674 | orchestrator | Saturday 28 March 2026 01:01:36 +0000 (0:00:00.548) 0:00:23.276 ******** 2026-03-28 01:04:02.014690 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:04:02.014704 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:04:02.014720 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:04:02.014736 | orchestrator | 2026-03-28 01:04:02.014754 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-28 01:04:02.014770 | orchestrator | Saturday 28 March 2026 01:01:36 +0000 (0:00:00.316) 0:00:23.592 ******** 2026-03-28 01:04:02.014786 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:04:02.014801 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:04:02.014811 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:04:02.014821 | orchestrator | 2026-03-28 01:04:02.014831 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-28 01:04:02.014841 | orchestrator | Saturday 28 March 2026 01:01:37 +0000 (0:00:00.529) 0:00:24.121 ******** 2026-03-28 01:04:02.014852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 01:04:02.014863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:04:02.014874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:04:02.014884 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:04:02.014908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 01:04:02.014956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:04:02.014968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:04:02.014979 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:04:02.014989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 01:04:02.015000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 
'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:04:02.015018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:04:02.015028 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:04:02.015038 | orchestrator | 2026-03-28 01:04:02.015053 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-28 01:04:02.015063 | orchestrator | Saturday 28 March 2026 01:01:37 +0000 (0:00:00.679) 0:00:24.801 ******** 2026-03-28 01:04:02.015073 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:04:02.015083 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:04:02.015092 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:04:02.015102 | orchestrator | 2026-03-28 01:04:02.015112 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-28 01:04:02.015122 | orchestrator | 
Saturday 28 March 2026 01:01:38 +0000 (0:00:00.362) 0:00:25.164 ******** 2026-03-28 01:04:02.015132 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-28 01:04:02.015148 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-28 01:04:02.015158 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-28 01:04:02.015168 | orchestrator | 2026-03-28 01:04:02.015178 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-28 01:04:02.015188 | orchestrator | Saturday 28 March 2026 01:01:39 +0000 (0:00:01.637) 0:00:26.801 ******** 2026-03-28 01:04:02.015198 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 01:04:02.015207 | orchestrator | 2026-03-28 01:04:02.015217 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-28 01:04:02.015227 | orchestrator | Saturday 28 March 2026 01:01:40 +0000 (0:00:01.011) 0:00:27.813 ******** 2026-03-28 01:04:02.015237 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:04:02.015247 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:04:02.015256 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:04:02.015266 | orchestrator | 2026-03-28 01:04:02.015276 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-28 01:04:02.015286 | orchestrator | Saturday 28 March 2026 01:01:42 +0000 (0:00:01.092) 0:00:28.905 ******** 2026-03-28 01:04:02.015295 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-28 01:04:02.015305 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-28 01:04:02.015315 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 01:04:02.015325 | orchestrator | 2026-03-28 01:04:02.015334 | orchestrator | TASK [keystone : Set fact with the generated 
cron jobs for building the crontab later] *** 2026-03-28 01:04:02.015344 | orchestrator | Saturday 28 March 2026 01:01:43 +0000 (0:00:01.139) 0:00:30.045 ******** 2026-03-28 01:04:02.015354 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:04:02.015365 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:04:02.015374 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:04:02.015384 | orchestrator | 2026-03-28 01:04:02.015394 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-28 01:04:02.015404 | orchestrator | Saturday 28 March 2026 01:01:43 +0000 (0:00:00.342) 0:00:30.387 ******** 2026-03-28 01:04:02.015414 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-28 01:04:02.015423 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-28 01:04:02.015444 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-28 01:04:02.015455 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-28 01:04:02.015465 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-28 01:04:02.015475 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-28 01:04:02.015485 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-28 01:04:02.015495 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-28 01:04:02.015504 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-28 01:04:02.015514 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 
2026-03-28 01:04:02.015524 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-28 01:04:02.015533 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-28 01:04:02.015543 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-28 01:04:02.015553 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-28 01:04:02.015562 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-28 01:04:02.015572 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-28 01:04:02.015582 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-28 01:04:02.015592 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-28 01:04:02.015601 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-28 01:04:02.015611 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-28 01:04:02.015621 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-28 01:04:02.015630 | orchestrator | 2026-03-28 01:04:02.015640 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-28 01:04:02.015654 | orchestrator | Saturday 28 March 2026 01:01:52 +0000 (0:00:09.168) 0:00:39.555 ******** 2026-03-28 01:04:02.015665 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-28 01:04:02.015675 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 
2026-03-28 01:04:02.015684 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-28 01:04:02.015694 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-28 01:04:02.015704 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-28 01:04:02.015720 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-28 01:04:02.015730 | orchestrator | 2026-03-28 01:04:02.015740 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-28 01:04:02.015750 | orchestrator | Saturday 28 March 2026 01:01:55 +0000 (0:00:03.047) 0:00:42.603 ******** 2026-03-28 01:04:02.015761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:04:02.015780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:04:02.015792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}}}}) 2026-03-28 01:04:02.015808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:04:02.015824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:04:02.015842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:04:02.015852 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:04:02.015863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:04:02.015873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:04:02.015884 | 
orchestrator | 2026-03-28 01:04:02.015894 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-28 01:04:02.015904 | orchestrator | Saturday 28 March 2026 01:01:58 +0000 (0:00:02.381) 0:00:44.984 ******** 2026-03-28 01:04:02.015913 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:04:02.015950 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:04:02.015965 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:04:02.015975 | orchestrator | 2026-03-28 01:04:02.015985 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-28 01:04:02.015995 | orchestrator | Saturday 28 March 2026 01:01:58 +0000 (0:00:00.295) 0:00:45.279 ******** 2026-03-28 01:04:02.016004 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:04:02.016014 | orchestrator | 2026-03-28 01:04:02.016028 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-28 01:04:02.016038 | orchestrator | Saturday 28 March 2026 01:02:00 +0000 (0:00:02.373) 0:00:47.653 ******** 2026-03-28 01:04:02.016048 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:04:02.016058 | orchestrator | 2026-03-28 01:04:02.016068 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-28 01:04:02.016078 | orchestrator | Saturday 28 March 2026 01:02:03 +0000 (0:00:02.404) 0:00:50.058 ******** 2026-03-28 01:04:02.016088 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:04:02.016098 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:04:02.016114 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:04:02.016124 | orchestrator | 2026-03-28 01:04:02.016134 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-28 01:04:02.016149 | orchestrator | Saturday 28 March 2026 01:02:04 +0000 (0:00:01.117) 0:00:51.175 ******** 2026-03-28 01:04:02.016187 | orchestrator | ok: 
[testbed-node-0] 2026-03-28 01:04:02.016199 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:04:02.016208 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:04:02.016221 | orchestrator | 2026-03-28 01:04:02.016237 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-28 01:04:02.016254 | orchestrator | Saturday 28 March 2026 01:02:04 +0000 (0:00:00.344) 0:00:51.520 ******** 2026-03-28 01:04:02.016271 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:04:02.016287 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:04:02.016303 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:04:02.016335 | orchestrator | 2026-03-28 01:04:02.016354 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-28 01:04:02.016373 | orchestrator | Saturday 28 March 2026 01:02:04 +0000 (0:00:00.332) 0:00:51.852 ******** 2026-03-28 01:04:02.016391 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:04:02.016408 | orchestrator | 2026-03-28 01:04:02.016426 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-28 01:04:02.016446 | orchestrator | Saturday 28 March 2026 01:02:20 +0000 (0:00:15.585) 0:01:07.437 ******** 2026-03-28 01:04:02.016462 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:04:02.016500 | orchestrator | 2026-03-28 01:04:02.016521 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-28 01:04:02.016538 | orchestrator | Saturday 28 March 2026 01:02:32 +0000 (0:00:11.829) 0:01:19.267 ******** 2026-03-28 01:04:02.016556 | orchestrator | 2026-03-28 01:04:02.016574 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-28 01:04:02.016594 | orchestrator | Saturday 28 March 2026 01:02:32 +0000 (0:00:00.069) 0:01:19.337 ******** 2026-03-28 01:04:02.016612 | orchestrator | 2026-03-28 
01:04:02.016632 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-28 01:04:02.016650 | orchestrator | Saturday 28 March 2026 01:02:32 +0000 (0:00:00.068) 0:01:19.406 ******** 2026-03-28 01:04:02.016669 | orchestrator | 2026-03-28 01:04:02.016688 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-28 01:04:02.016707 | orchestrator | Saturday 28 March 2026 01:02:32 +0000 (0:00:00.066) 0:01:19.472 ******** 2026-03-28 01:04:02.016726 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:04:02.016744 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:04:02.016764 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:04:02.016783 | orchestrator | 2026-03-28 01:04:02.016802 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-28 01:04:02.016821 | orchestrator | Saturday 28 March 2026 01:02:52 +0000 (0:00:19.936) 0:01:39.408 ******** 2026-03-28 01:04:02.016841 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:04:02.016881 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:04:02.016901 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:04:02.016918 | orchestrator | 2026-03-28 01:04:02.016964 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-28 01:04:02.016984 | orchestrator | Saturday 28 March 2026 01:02:57 +0000 (0:00:05.189) 0:01:44.598 ******** 2026-03-28 01:04:02.017005 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:04:02.017024 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:04:02.017044 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:04:02.017063 | orchestrator | 2026-03-28 01:04:02.017082 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-28 01:04:02.017097 | orchestrator | Saturday 28 March 2026 01:03:05 +0000 (0:00:07.377) 0:01:51.975 
********
2026-03-28 01:04:02.017108 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:04:02.017119 | orchestrator |
2026-03-28 01:04:02.017149 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-03-28 01:04:02.017168 | orchestrator | Saturday 28 March 2026 01:03:05 +0000 (0:00:00.681) 0:01:52.657 ********
2026-03-28 01:04:02.017188 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:04:02.017205 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:04:02.017225 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:04:02.017243 | orchestrator |
2026-03-28 01:04:02.017261 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-03-28 01:04:02.017280 | orchestrator | Saturday 28 March 2026 01:03:06 +0000 (0:00:00.809) 0:01:53.466 ********
2026-03-28 01:04:02.017298 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:04:02.017315 | orchestrator |
2026-03-28 01:04:02.017332 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-03-28 01:04:02.017349 | orchestrator | Saturday 28 March 2026 01:03:08 +0000 (0:00:01.839) 0:01:55.306 ********
2026-03-28 01:04:02.017366 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-03-28 01:04:02.017385 | orchestrator |
2026-03-28 01:04:02.017404 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-03-28 01:04:02.017422 | orchestrator | Saturday 28 March 2026 01:03:21 +0000 (0:00:13.030) 0:02:08.336 ********
2026-03-28 01:04:02.017441 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-03-28 01:04:02.017461 | orchestrator |
2026-03-28 01:04:02.017478 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-03-28 01:04:02.017497 | orchestrator | Saturday 28 March 2026 01:03:47 +0000 (0:00:26.074) 0:02:34.411 ********
2026-03-28 01:04:02.017526 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-03-28 01:04:02.017545 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-03-28 01:04:02.017564 | orchestrator |
2026-03-28 01:04:02.017581 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-03-28 01:04:02.017601 | orchestrator | Saturday 28 March 2026 01:03:54 +0000 (0:00:07.030) 0:02:41.441 ********
2026-03-28 01:04:02.017618 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:04:02.017637 | orchestrator |
2026-03-28 01:04:02.017656 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-03-28 01:04:02.017673 | orchestrator | Saturday 28 March 2026 01:03:54 +0000 (0:00:00.139) 0:02:41.580 ********
2026-03-28 01:04:02.017692 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:04:02.017710 | orchestrator |
2026-03-28 01:04:02.017748 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-03-28 01:04:02.017768 | orchestrator | Saturday 28 March 2026 01:03:54 +0000 (0:00:00.116) 0:02:41.697 ********
2026-03-28 01:04:02.017788 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:04:02.017807 | orchestrator |
2026-03-28 01:04:02.017827 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-03-28 01:04:02.017847 | orchestrator | Saturday 28 March 2026 01:03:54 +0000 (0:00:00.131) 0:02:41.828 ********
2026-03-28 01:04:02.017866 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:04:02.017885 | orchestrator |
2026-03-28 01:04:02.017903 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-03-28 01:04:02.017994 | orchestrator | Saturday 28 March 2026 01:03:55 +0000 (0:00:00.586) 0:02:42.415 ********
2026-03-28 01:04:02.018065 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:04:02.018091 | orchestrator |
2026-03-28 01:04:02.018111 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-28 01:04:02.018131 | orchestrator | Saturday 28 March 2026 01:03:59 +0000 (0:00:03.510) 0:02:45.925 ********
2026-03-28 01:04:02.018151 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:04:02.018172 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:04:02.018185 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:04:02.018195 | orchestrator |
2026-03-28 01:04:02.018207 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:04:02.018233 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-28 01:04:02.018245 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-28 01:04:02.018257 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-28 01:04:02.018267 | orchestrator |
2026-03-28 01:04:02.018278 | orchestrator |
2026-03-28 01:04:02.018289 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:04:02.018300 | orchestrator | Saturday 28 March 2026 01:03:59 +0000 (0:00:00.449) 0:02:46.375 ********
2026-03-28 01:04:02.018311 | orchestrator | ===============================================================================
2026-03-28 01:04:02.018323 | orchestrator | service-ks-register : keystone | Creating services --------------------- 26.07s
2026-03-28 01:04:02.018333 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 19.94s
2026-03-28 01:04:02.018344 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.59s
2026-03-28 01:04:02.018355 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.03s
2026-03-28 01:04:02.018366 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.83s
2026-03-28 01:04:02.018377 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.17s
2026-03-28 01:04:02.018388 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.38s
2026-03-28 01:04:02.018398 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.03s
2026-03-28 01:04:02.018437 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.86s
2026-03-28 01:04:02.018448 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.19s
2026-03-28 01:04:02.018460 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.87s
2026-03-28 01:04:02.018470 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.81s
2026-03-28 01:04:02.018481 | orchestrator | keystone : Creating default user role ----------------------------------- 3.51s
2026-03-28 01:04:02.018492 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.05s
2026-03-28 01:04:02.018503 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.40s
2026-03-28 01:04:02.018514 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.38s
2026-03-28 01:04:02.018525 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.37s
2026-03-28 01:04:02.018535 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.05s
2026-03-28 01:04:02.018544 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.84s
2026-03-28 01:04:02.018554 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.71s
2026-03-28 01:04:02.018563 | orchestrator | 2026-03-28 01:04:02 | INFO  | Task 101dafb3-32fe-4bc4-b7c7-a384a7b3f218 is in state STARTED
2026-03-28 01:04:02.018573 | orchestrator | 2026-03-28 01:04:02 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:04:05.073090 | orchestrator | 2026-03-28 01:04:05 | INFO  | Task fc64854e-0cd6-4890-833b-da25dc0e6085 is in state STARTED
2026-03-28 01:04:05.073199 | orchestrator | 2026-03-28 01:04:05 | INFO  | Task c08d10ab-a42c-4dd0-900c-6c35ed4f279c is in state STARTED
2026-03-28 01:04:05.073216 | orchestrator | 2026-03-28 01:04:05 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED
2026-03-28 01:04:05.073228 | orchestrator | 2026-03-28 01:04:05 | INFO  | Task a245b4c1-8e6a-459c-ac33-d24df19a9e0a is in state STARTED
2026-03-28 01:04:05.073239 | orchestrator | 2026-03-28 01:04:05 | INFO  | Task 101dafb3-32fe-4bc4-b7c7-a384a7b3f218 is in state STARTED
2026-03-28 01:04:05.073277 | orchestrator | 2026-03-28 01:04:05 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:04:11.193697 | orchestrator | 2026-03-28 01:04:11 | INFO  | Task fc64854e-0cd6-4890-833b-da25dc0e6085 is in state STARTED
2026-03-28 01:04:11.194068 | orchestrator | 2026-03-28 01:04:11 | INFO  | Task c36b1680-8a8a-4e8f-9443-3f7f2ba29c90 is in state STARTED
2026-03-28 01:04:11.195845 | orchestrator | 2026-03-28 01:04:11 | INFO  | Task c08d10ab-a42c-4dd0-900c-6c35ed4f279c is in state STARTED
2026-03-28 01:04:11.197156 | orchestrator | 2026-03-28 01:04:11 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED
2026-03-28 01:04:11.198657 | orchestrator | 2026-03-28 01:04:11 | INFO  | Task a245b4c1-8e6a-459c-ac33-d24df19a9e0a is in state STARTED
2026-03-28 01:04:11.200237 | orchestrator | 2026-03-28 01:04:11 | INFO  | Task 101dafb3-32fe-4bc4-b7c7-a384a7b3f218 is in state SUCCESS
2026-03-28 01:04:11.200512 | orchestrator | 2026-03-28 01:04:11 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:05:45.844494 | orchestrator | 2026-03-28 01:05:45 | INFO  | Task fc64854e-0cd6-4890-833b-da25dc0e6085 is in state STARTED
2026-03-28 01:05:45.852032 | orchestrator |
2026-03-28 01:05:45.852178 | orchestrator |
2026-03-28 01:05:45.852206 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-03-28 01:05:45.852227 | orchestrator |
2026-03-28 01:05:45.852245 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-03-28 01:05:45.852264 | orchestrator | Saturday 28 March 2026 01:03:10 +0000 (0:00:00.261) 0:00:00.261 ********
2026-03-28 01:05:45.852327 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-03-28 01:05:45.852343 | orchestrator |
2026-03-28 01:05:45.852355 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-03-28 01:05:45.852367 | orchestrator | Saturday 28 March 2026 01:03:10 +0000 (0:00:00.250) 0:00:00.512 ********
2026-03-28 01:05:45.852378 | orchestrator | changed: [testbed-manager] =>
(item=/opt/cephclient/configuration)
2026-03-28 01:05:45.852390 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-03-28 01:05:45.852401 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-03-28 01:05:45.852413 | orchestrator |
2026-03-28 01:05:45.852424 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-03-28 01:05:45.852435 | orchestrator | Saturday 28 March 2026 01:03:11 +0000 (0:00:01.494) 0:00:02.007 ********
2026-03-28 01:05:45.852447 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-03-28 01:05:45.852459 | orchestrator |
2026-03-28 01:05:45.852469 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-03-28 01:05:45.852480 | orchestrator | Saturday 28 March 2026 01:03:13 +0000 (0:00:01.595) 0:00:03.603 ********
2026-03-28 01:05:45.852491 | orchestrator | changed: [testbed-manager]
2026-03-28 01:05:45.852503 | orchestrator |
2026-03-28 01:05:45.852514 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-03-28 01:05:45.852525 | orchestrator | Saturday 28 March 2026 01:03:14 +0000 (0:00:00.986) 0:00:04.589 ********
2026-03-28 01:05:45.852555 | orchestrator | changed: [testbed-manager]
2026-03-28 01:05:45.852566 | orchestrator |
2026-03-28 01:05:45.852577 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-03-28 01:05:45.852588 | orchestrator | Saturday 28 March 2026 01:03:15 +0000 (0:00:01.057) 0:00:05.647 ********
2026-03-28 01:05:45.852599 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-03-28 01:05:45.852609 | orchestrator | ok: [testbed-manager]
2026-03-28 01:05:45.852621 | orchestrator |
2026-03-28 01:05:45.852633 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-03-28 01:05:45.852644 | orchestrator | Saturday 28 March 2026 01:03:58 +0000 (0:00:42.724) 0:00:48.371 ********
2026-03-28 01:05:45.852655 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-03-28 01:05:45.852666 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-03-28 01:05:45.852677 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-03-28 01:05:45.852688 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-03-28 01:05:45.852699 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-03-28 01:05:45.852712 | orchestrator |
2026-03-28 01:05:45.852730 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-03-28 01:05:45.852759 | orchestrator | Saturday 28 March 2026 01:04:02 +0000 (0:00:04.619) 0:00:52.990 ********
2026-03-28 01:05:45.852777 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-03-28 01:05:45.852794 | orchestrator |
2026-03-28 01:05:45.852812 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-03-28 01:05:45.852855 | orchestrator | Saturday 28 March 2026 01:04:03 +0000 (0:00:00.467) 0:00:53.458 ********
2026-03-28 01:05:45.852876 | orchestrator | skipping: [testbed-manager]
2026-03-28 01:05:45.852893 | orchestrator |
2026-03-28 01:05:45.852912 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-03-28 01:05:45.852930 | orchestrator | Saturday 28 March 2026 01:04:03 +0000 (0:00:00.123) 0:00:53.581 ********
2026-03-28 01:05:45.852949 | orchestrator | skipping: [testbed-manager]
2026-03-28 01:05:45.852962 | orchestrator |
2026-03-28 01:05:45.852973 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-03-28 01:05:45.852984 | orchestrator | Saturday 28 March 2026 01:04:04 +0000 (0:00:00.552) 0:00:54.134 ********
2026-03-28 01:05:45.853008 | orchestrator | changed: [testbed-manager]
2026-03-28 01:05:45.853019 | orchestrator |
2026-03-28 01:05:45.853030 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-03-28 01:05:45.853041 | orchestrator | Saturday 28 March 2026 01:04:05 +0000 (0:00:01.669) 0:00:55.804 ********
2026-03-28 01:05:45.853052 | orchestrator | changed: [testbed-manager]
2026-03-28 01:05:45.853064 | orchestrator |
2026-03-28 01:05:45.853075 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-03-28 01:05:45.853086 | orchestrator | Saturday 28 March 2026 01:04:06 +0000 (0:00:00.978) 0:00:56.782 ********
2026-03-28 01:05:45.853096 | orchestrator | changed: [testbed-manager]
2026-03-28 01:05:45.853110 | orchestrator |
2026-03-28 01:05:45.853123 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-03-28 01:05:45.853136 | orchestrator | Saturday 28 March 2026 01:04:07 +0000 (0:00:00.635) 0:00:57.418 ********
2026-03-28 01:05:45.853150 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-03-28 01:05:45.853162 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-03-28 01:05:45.853174 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-03-28 01:05:45.853185 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-03-28 01:05:45.853196 | orchestrator |
2026-03-28 01:05:45.853207 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:05:45.853218 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 01:05:45.853237 | orchestrator |
2026-03-28 01:05:45.853258 | orchestrator |
2026-03-28
01:05:45.853307 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:05:45.853327 | orchestrator | Saturday 28 March 2026 01:04:08 +0000 (0:00:01.507) 0:00:58.925 ******** 2026-03-28 01:05:45.853346 | orchestrator | =============================================================================== 2026-03-28 01:05:45.853364 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.72s 2026-03-28 01:05:45.853384 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.62s 2026-03-28 01:05:45.853402 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.67s 2026-03-28 01:05:45.853422 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.60s 2026-03-28 01:05:45.853435 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.51s 2026-03-28 01:05:45.853446 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.49s 2026-03-28 01:05:45.853456 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.06s 2026-03-28 01:05:45.853467 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.99s 2026-03-28 01:05:45.853478 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.98s 2026-03-28 01:05:45.853489 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.64s 2026-03-28 01:05:45.853499 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.55s 2026-03-28 01:05:45.853510 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.47s 2026-03-28 01:05:45.853521 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.25s 2026-03-28 01:05:45.853532 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s 2026-03-28 01:05:45.853542 | orchestrator | 2026-03-28 01:05:45.853553 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-28 01:05:45.853565 | orchestrator | 2.16.14 2026-03-28 01:05:45.853576 | orchestrator | 2026-03-28 01:05:45.853595 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-03-28 01:05:45.853606 | orchestrator | 2026-03-28 01:05:45.853617 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-28 01:05:45.853628 | orchestrator | Saturday 28 March 2026 01:04:13 +0000 (0:00:00.270) 0:00:00.270 ******** 2026-03-28 01:05:45.853651 | orchestrator | changed: [testbed-manager] 2026-03-28 01:05:45.853662 | orchestrator | 2026-03-28 01:05:45.853673 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-28 01:05:45.853683 | orchestrator | Saturday 28 March 2026 01:04:15 +0000 (0:00:01.774) 0:00:02.045 ******** 2026-03-28 01:05:45.853694 | orchestrator | changed: [testbed-manager] 2026-03-28 01:05:45.853705 | orchestrator | 2026-03-28 01:05:45.853716 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-28 01:05:45.853727 | orchestrator | Saturday 28 March 2026 01:04:16 +0000 (0:00:01.200) 0:00:03.246 ******** 2026-03-28 01:05:45.853737 | orchestrator | changed: [testbed-manager] 2026-03-28 01:05:45.853748 | orchestrator | 2026-03-28 01:05:45.853759 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-28 01:05:45.853770 | orchestrator | Saturday 28 March 2026 01:04:18 +0000 (0:00:01.195) 0:00:04.441 ******** 2026-03-28 01:05:45.853781 | orchestrator | changed: [testbed-manager] 2026-03-28 01:05:45.853792 | orchestrator | 2026-03-28 01:05:45.853802 | orchestrator | TASK 
[Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-28 01:05:45.853813 | orchestrator | Saturday 28 March 2026 01:04:19 +0000 (0:00:01.248) 0:00:05.690 ******** 2026-03-28 01:05:45.853824 | orchestrator | changed: [testbed-manager] 2026-03-28 01:05:45.853870 | orchestrator | 2026-03-28 01:05:45.853888 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-28 01:05:45.853905 | orchestrator | Saturday 28 March 2026 01:04:20 +0000 (0:00:01.156) 0:00:06.846 ******** 2026-03-28 01:05:45.853921 | orchestrator | changed: [testbed-manager] 2026-03-28 01:05:45.853933 | orchestrator | 2026-03-28 01:05:45.853943 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-28 01:05:45.853954 | orchestrator | Saturday 28 March 2026 01:04:21 +0000 (0:00:01.095) 0:00:07.942 ******** 2026-03-28 01:05:45.853965 | orchestrator | changed: [testbed-manager] 2026-03-28 01:05:45.853976 | orchestrator | 2026-03-28 01:05:45.853987 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-28 01:05:45.853998 | orchestrator | Saturday 28 March 2026 01:04:23 +0000 (0:00:02.102) 0:00:10.045 ******** 2026-03-28 01:05:45.854008 | orchestrator | changed: [testbed-manager] 2026-03-28 01:05:45.854186 | orchestrator | 2026-03-28 01:05:45.854206 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-28 01:05:45.854217 | orchestrator | Saturday 28 March 2026 01:04:25 +0000 (0:00:01.567) 0:00:11.612 ******** 2026-03-28 01:05:45.854228 | orchestrator | changed: [testbed-manager] 2026-03-28 01:05:45.854239 | orchestrator | 2026-03-28 01:05:45.854250 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-28 01:05:45.854261 | orchestrator | Saturday 28 March 2026 01:05:20 +0000 (0:00:54.847) 0:01:06.460 ******** 2026-03-28 
01:05:45.854271 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:05:45.854282 | orchestrator | 2026-03-28 01:05:45.854293 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-28 01:05:45.854304 | orchestrator | 2026-03-28 01:05:45.854314 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-28 01:05:45.854326 | orchestrator | Saturday 28 March 2026 01:05:20 +0000 (0:00:00.187) 0:01:06.648 ******** 2026-03-28 01:05:45.854337 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:05:45.854348 | orchestrator | 2026-03-28 01:05:45.854359 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-28 01:05:45.854370 | orchestrator | 2026-03-28 01:05:45.854380 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-28 01:05:45.854392 | orchestrator | Saturday 28 March 2026 01:05:32 +0000 (0:00:11.790) 0:01:18.438 ******** 2026-03-28 01:05:45.854404 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:05:45.854421 | orchestrator | 2026-03-28 01:05:45.854453 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-28 01:05:45.854499 | orchestrator | 2026-03-28 01:05:45.854517 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-28 01:05:45.854553 | orchestrator | Saturday 28 March 2026 01:05:43 +0000 (0:00:11.290) 0:01:29.728 ******** 2026-03-28 01:05:45.854570 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:05:45.854587 | orchestrator | 2026-03-28 01:05:45.854603 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:05:45.854620 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 01:05:45.854637 | orchestrator | testbed-node-0 : 
ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:05:45.854654 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:05:45.854670 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:05:45.854687 | orchestrator | 2026-03-28 01:05:45.854703 | orchestrator | 2026-03-28 01:05:45.854719 | orchestrator | 2026-03-28 01:05:45.854736 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:05:45.854755 | orchestrator | Saturday 28 March 2026 01:05:44 +0000 (0:00:01.141) 0:01:30.869 ******** 2026-03-28 01:05:45.854773 | orchestrator | =============================================================================== 2026-03-28 01:05:45.854870 | orchestrator | Create admin user ------------------------------------------------------ 54.85s 2026-03-28 01:05:45.854889 | orchestrator | Restart ceph manager service ------------------------------------------- 24.22s 2026-03-28 01:05:45.854906 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.10s 2026-03-28 01:05:45.854928 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.77s 2026-03-28 01:05:45.854939 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.57s 2026-03-28 01:05:45.854949 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.25s 2026-03-28 01:05:45.854960 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.20s 2026-03-28 01:05:45.854971 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.20s 2026-03-28 01:05:45.854982 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.16s 2026-03-28 01:05:45.854993 | orchestrator | Set 
mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.10s 2026-03-28 01:05:45.855004 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.19s 2026-03-28 01:05:45.855015 | orchestrator | 2026-03-28 01:05:45 | INFO  | Task c36b1680-8a8a-4e8f-9443-3f7f2ba29c90 is in state SUCCESS 2026-03-28 01:05:45.855166 | orchestrator | 2026-03-28 01:05:45 | INFO  | Task c08d10ab-a42c-4dd0-900c-6c35ed4f279c is in state STARTED 2026-03-28 01:05:45.855767 | orchestrator | 2026-03-28 01:05:45 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:05:45.856623 | orchestrator | 2026-03-28 01:05:45 | INFO  | Task a245b4c1-8e6a-459c-ac33-d24df19a9e0a is in state STARTED 2026-03-28 01:05:45.857329 | orchestrator | 2026-03-28 01:05:45 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:05:48.886647 | orchestrator | 2026-03-28 01:05:48 | INFO  | Task fc64854e-0cd6-4890-833b-da25dc0e6085 is in state STARTED 2026-03-28 01:05:48.887281 | orchestrator | 2026-03-28 01:05:48 | INFO  | Task c08d10ab-a42c-4dd0-900c-6c35ed4f279c is in state STARTED 2026-03-28 01:05:48.888166 | orchestrator | 2026-03-28 01:05:48 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:05:48.889235 | orchestrator | 2026-03-28 01:05:48 | INFO  | Task a245b4c1-8e6a-459c-ac33-d24df19a9e0a is in state STARTED 2026-03-28 01:05:48.889308 | orchestrator | 2026-03-28 01:05:48 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:05:51.927713 | orchestrator | 2026-03-28 01:05:51 | INFO  | Task fc64854e-0cd6-4890-833b-da25dc0e6085 is in state STARTED 2026-03-28 01:05:51.927953 | orchestrator | 2026-03-28 01:05:51 | INFO  | Task c08d10ab-a42c-4dd0-900c-6c35ed4f279c is in state SUCCESS 2026-03-28 01:05:51.928982 | orchestrator | 2026-03-28 01:05:51 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:05:51.929916 | orchestrator | 2026-03-28 01:05:51 | INFO  | Task 
a245b4c1-8e6a-459c-ac33-d24df19a9e0a is in state STARTED 2026-03-28 01:05:51.929950 | orchestrator | 2026-03-28 01:05:51 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:05:54.965940 | orchestrator | 2026-03-28 01:05:54 | INFO  | Task fc64854e-0cd6-4890-833b-da25dc0e6085 is in state STARTED 2026-03-28 01:05:54.969756 | orchestrator | 2026-03-28 01:05:54 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:05:54.970553 | orchestrator | 2026-03-28 01:05:54 | INFO  | Task a245b4c1-8e6a-459c-ac33-d24df19a9e0a is in state STARTED 2026-03-28 01:05:54.971326 | orchestrator | 2026-03-28 01:05:54 | INFO  | Task 48f6bac4-5fbc-408d-9df4-f20d0e1f85e8 is in state STARTED 2026-03-28 01:05:54.971374 | orchestrator | 2026-03-28 01:05:54 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:05:58.001330 | orchestrator | 2026-03-28 01:05:58 | INFO  | Task fc64854e-0cd6-4890-833b-da25dc0e6085 is in state STARTED 2026-03-28 01:05:58.002182 | orchestrator | 2026-03-28 01:05:58 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:05:58.003395 | orchestrator | 2026-03-28 01:05:58 | INFO  | Task a245b4c1-8e6a-459c-ac33-d24df19a9e0a is in state STARTED 2026-03-28 01:05:58.005175 | orchestrator | 2026-03-28 01:05:58 | INFO  | Task 48f6bac4-5fbc-408d-9df4-f20d0e1f85e8 is in state STARTED 2026-03-28 01:05:58.005265 | orchestrator | 2026-03-28 01:05:58 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:06:01.105534 | orchestrator | 2026-03-28 01:06:01 | INFO  | Task fc64854e-0cd6-4890-833b-da25dc0e6085 is in state STARTED 2026-03-28 01:06:01.105809 | orchestrator | 2026-03-28 01:06:01 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:06:01.106734 | orchestrator | 2026-03-28 01:06:01 | INFO  | Task a245b4c1-8e6a-459c-ac33-d24df19a9e0a is in state STARTED 2026-03-28 01:06:01.107484 | orchestrator | 2026-03-28 01:06:01 | INFO  | Task 
48f6bac4-5fbc-408d-9df4-f20d0e1f85e8 is in state STARTED 2026-03-28 01:06:01.107533 | orchestrator | 2026-03-28 01:06:01 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:06:04.141893 | orchestrator | 2026-03-28 01:06:04 | INFO  | Task fc64854e-0cd6-4890-833b-da25dc0e6085 is in state STARTED 2026-03-28 01:06:04.142204 | orchestrator | 2026-03-28 01:06:04 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:06:04.143165 | orchestrator | 2026-03-28 01:06:04 | INFO  | Task a245b4c1-8e6a-459c-ac33-d24df19a9e0a is in state STARTED 2026-03-28 01:06:04.143840 | orchestrator | 2026-03-28 01:06:04 | INFO  | Task 48f6bac4-5fbc-408d-9df4-f20d0e1f85e8 is in state STARTED 2026-03-28 01:06:04.144521 | orchestrator | 2026-03-28 01:06:04 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:06:07.188840 | orchestrator | 2026-03-28 01:06:07 | INFO  | Task fc64854e-0cd6-4890-833b-da25dc0e6085 is in state STARTED 2026-03-28 01:06:07.190736 | orchestrator | 2026-03-28 01:06:07 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:06:07.192548 | orchestrator | 2026-03-28 01:06:07 | INFO  | Task a245b4c1-8e6a-459c-ac33-d24df19a9e0a is in state STARTED 2026-03-28 01:06:07.196157 | orchestrator | 2026-03-28 01:06:07 | INFO  | Task 48f6bac4-5fbc-408d-9df4-f20d0e1f85e8 is in state STARTED 2026-03-28 01:06:07.196897 | orchestrator | 2026-03-28 01:06:07 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:06:10.229876 | orchestrator | 2026-03-28 01:06:10 | INFO  | Task fc64854e-0cd6-4890-833b-da25dc0e6085 is in state STARTED 2026-03-28 01:06:10.231887 | orchestrator | 2026-03-28 01:06:10 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:06:10.232503 | orchestrator | 2026-03-28 01:06:10 | INFO  | Task a245b4c1-8e6a-459c-ac33-d24df19a9e0a is in state STARTED 2026-03-28 01:06:10.233402 | orchestrator | 2026-03-28 01:06:10 | INFO  | Task 
48f6bac4-5fbc-408d-9df4-f20d0e1f85e8 is in state STARTED 2026-03-28 01:06:10.234387 | orchestrator | 2026-03-28 01:06:10 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:06:13.277744 | orchestrator | 2026-03-28 01:06:13 | INFO  | Task fc64854e-0cd6-4890-833b-da25dc0e6085 is in state STARTED 2026-03-28 01:06:13.278714 | orchestrator | 2026-03-28 01:06:13 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:06:13.280310 | orchestrator | 2026-03-28 01:06:13 | INFO  | Task a245b4c1-8e6a-459c-ac33-d24df19a9e0a is in state STARTED 2026-03-28 01:06:13.281731 | orchestrator | 2026-03-28 01:06:13 | INFO  | Task 48f6bac4-5fbc-408d-9df4-f20d0e1f85e8 is in state STARTED 2026-03-28 01:06:13.281772 | orchestrator | 2026-03-28 01:06:13 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:06:16.314404 | orchestrator | 2026-03-28 01:06:16 | INFO  | Task fc64854e-0cd6-4890-833b-da25dc0e6085 is in state STARTED 2026-03-28 01:06:16.315104 | orchestrator | 2026-03-28 01:06:16 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:06:16.316972 | orchestrator | 2026-03-28 01:06:16 | INFO  | Task a245b4c1-8e6a-459c-ac33-d24df19a9e0a is in state SUCCESS 2026-03-28 01:06:16.319168 | orchestrator | 2026-03-28 01:06:16.319261 | orchestrator | 2026-03-28 01:06:16.319362 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-03-28 01:06:16.319386 | orchestrator | 2026-03-28 01:06:16.319405 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-03-28 01:06:16.319424 | orchestrator | Saturday 28 March 2026 01:04:05 +0000 (0:00:00.114) 0:00:00.114 ******** 2026-03-28 01:06:16.319444 | orchestrator | changed: [localhost] 2026-03-28 01:06:16.319463 | orchestrator | 2026-03-28 01:06:16.319481 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-03-28 01:06:16.319534 | 
orchestrator | Saturday 28 March 2026 01:04:06 +0000 (0:00:01.406) 0:00:01.521 ******** 2026-03-28 01:06:16.319553 | orchestrator | changed: [localhost] 2026-03-28 01:06:16.319571 | orchestrator | 2026-03-28 01:06:16.319590 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-03-28 01:06:16.319611 | orchestrator | Saturday 28 March 2026 01:04:58 +0000 (0:00:51.241) 0:00:52.762 ******** 2026-03-28 01:06:16.319630 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 2026-03-28 01:06:16.319650 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (2 retries left). 2026-03-28 01:06:16.319664 | orchestrator | changed: [localhost] 2026-03-28 01:06:16.319676 | orchestrator | 2026-03-28 01:06:16.319689 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:06:16.319702 | orchestrator | 2026-03-28 01:06:16.319715 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:06:16.319728 | orchestrator | Saturday 28 March 2026 01:05:49 +0000 (0:00:51.865) 0:01:44.628 ******** 2026-03-28 01:06:16.319740 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:06:16.319753 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:06:16.319766 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:06:16.319897 | orchestrator | 2026-03-28 01:06:16.319923 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:06:16.319942 | orchestrator | Saturday 28 March 2026 01:05:50 +0000 (0:00:00.348) 0:01:44.976 ******** 2026-03-28 01:06:16.319962 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-03-28 01:06:16.320001 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-03-28 01:06:16.320022 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 
2026-03-28 01:06:16.320044 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-03-28 01:06:16.320065 | orchestrator | 2026-03-28 01:06:16.320084 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-03-28 01:06:16.320096 | orchestrator | skipping: no hosts matched 2026-03-28 01:06:16.320108 | orchestrator | 2026-03-28 01:06:16.320119 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:06:16.320130 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:06:16.320144 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:06:16.320157 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:06:16.320168 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:06:16.320179 | orchestrator | 2026-03-28 01:06:16.320190 | orchestrator | 2026-03-28 01:06:16.320201 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:06:16.320212 | orchestrator | Saturday 28 March 2026 01:05:50 +0000 (0:00:00.700) 0:01:45.677 ******** 2026-03-28 01:06:16.320223 | orchestrator | =============================================================================== 2026-03-28 01:06:16.320233 | orchestrator | Download ironic-agent kernel ------------------------------------------- 51.87s 2026-03-28 01:06:16.320243 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 51.24s 2026-03-28 01:06:16.320253 | orchestrator | Ensure the destination directory exists --------------------------------- 1.41s 2026-03-28 01:06:16.320263 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s 2026-03-28 01:06:16.320273 | 
orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2026-03-28 01:06:16.320282 | orchestrator | 2026-03-28 01:06:16.320293 | orchestrator | 2026-03-28 01:06:16.320302 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:06:16.320312 | orchestrator | 2026-03-28 01:06:16.320322 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:06:16.320331 | orchestrator | Saturday 28 March 2026 01:04:06 +0000 (0:00:00.319) 0:00:00.319 ******** 2026-03-28 01:06:16.320341 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:06:16.320351 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:06:16.320360 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:06:16.320370 | orchestrator | 2026-03-28 01:06:16.320380 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:06:16.320389 | orchestrator | Saturday 28 March 2026 01:04:06 +0000 (0:00:00.536) 0:00:00.855 ******** 2026-03-28 01:06:16.320399 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-28 01:06:16.320409 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-28 01:06:16.320418 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-28 01:06:16.320428 | orchestrator | 2026-03-28 01:06:16.320438 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-03-28 01:06:16.320448 | orchestrator | 2026-03-28 01:06:16.320457 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-28 01:06:16.320467 | orchestrator | Saturday 28 March 2026 01:04:07 +0000 (0:00:00.863) 0:00:01.719 ******** 2026-03-28 01:06:16.320509 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:06:16.320520 | 
orchestrator | 2026-03-28 01:06:16.320530 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-03-28 01:06:16.320540 | orchestrator | Saturday 28 March 2026 01:04:08 +0000 (0:00:00.745) 0:00:02.464 ******** 2026-03-28 01:06:16.320550 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-28 01:06:16.320560 | orchestrator | 2026-03-28 01:06:16.320569 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-28 01:06:16.320579 | orchestrator | Saturday 28 March 2026 01:04:12 +0000 (0:00:03.745) 0:00:06.210 ******** 2026-03-28 01:06:16.320589 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-28 01:06:16.320599 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-28 01:06:16.320609 | orchestrator | 2026-03-28 01:06:16.320619 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-28 01:06:16.320629 | orchestrator | Saturday 28 March 2026 01:04:19 +0000 (0:00:07.636) 0:00:13.846 ******** 2026-03-28 01:06:16.320639 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 01:06:16.320649 | orchestrator | 2026-03-28 01:06:16.320659 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-28 01:06:16.320669 | orchestrator | Saturday 28 March 2026 01:04:23 +0000 (0:00:03.414) 0:00:17.261 ******** 2026-03-28 01:06:16.320679 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 01:06:16.320689 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-28 01:06:16.320699 | orchestrator | 2026-03-28 01:06:16.320709 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-28 01:06:16.320719 | orchestrator | 
Saturday 28 March 2026 01:04:28 +0000 (0:00:04.633) 0:00:21.894 ******** 2026-03-28 01:06:16.320729 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 01:06:16.320738 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-28 01:06:16.320748 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-28 01:06:16.320764 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-28 01:06:16.320774 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-28 01:06:16.320784 | orchestrator | 2026-03-28 01:06:16.320861 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-03-28 01:06:16.320880 | orchestrator | Saturday 28 March 2026 01:04:44 +0000 (0:00:16.309) 0:00:38.204 ******** 2026-03-28 01:06:16.320897 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-28 01:06:16.320915 | orchestrator | 2026-03-28 01:06:16.320930 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-28 01:06:16.320949 | orchestrator | Saturday 28 March 2026 01:04:47 +0000 (0:00:03.491) 0:00:41.695 ******** 2026-03-28 01:06:16.320963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:06:16.320985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:06:16.321006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.321018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:06:16.321034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.321047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.321066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.321092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.321118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:06:16.321135 | orchestrator |
2026-03-28 01:06:16.321149 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-03-28 01:06:16.321164 | orchestrator | Saturday 28 March 2026 01:04:50 +0000 (0:00:02.277) 0:00:43.972 ********
2026-03-28 01:06:16.321180 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-03-28 01:06:16.321195 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-03-28 01:06:16.321209 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-03-28 01:06:16.321223 | orchestrator |
2026-03-28 01:06:16.321238 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-03-28 01:06:16.321253 | orchestrator | Saturday 28 March 2026 01:04:52 +0000 (0:00:00.113) 0:00:46.005 ********
2026-03-28 01:06:16.321269 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:06:16.321283 | orchestrator |
2026-03-28 01:06:16.321298 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-03-28 01:06:16.321313 | orchestrator | Saturday 28 March 2026 01:04:52 +0000 (0:00:00.113) 0:00:46.119 ********
2026-03-28 01:06:16.321344 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:06:16.321361 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:06:16.321378 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:06:16.321394 | orchestrator |
2026-03-28 01:06:16.321410 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-28 01:06:16.321427 | orchestrator | Saturday 28 March 2026 01:04:52 +0000 (0:00:00.555) 0:00:46.674 ********
2026-03-28 01:06:16.321453 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:06:16.321473 | orchestrator | 2026-03-28 01:06:16.321488 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-28 01:06:16.321505 | orchestrator | Saturday 28 March 2026 01:04:54 +0000 (0:00:01.387) 0:00:48.062 ******** 2026-03-28 01:06:16.321522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:06:16.321555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:06:16.321587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:06:16.321607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 
01:06:16.321633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.321651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.321680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}}) 2026-03-28 01:06:16.321699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.321729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.321747 | orchestrator | 2026-03-28 01:06:16.321764 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-03-28 01:06:16.321779 | orchestrator | Saturday 28 March 2026 01:04:58 +0000 (0:00:04.495) 0:00:52.558 ******** 2026-03-28 01:06:16.321826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 01:06:16.321853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:06:16.321883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:06:16.321907 | orchestrator | skipping: [testbed-node-0] 2026-03-28 
01:06:16.321926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 01:06:16.321956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:06:16.321974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:06:16.321991 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:16.322107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 01:06:16.322154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:06:16.322173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:06:16.322192 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:16.322211 | orchestrator | 2026-03-28 01:06:16.322230 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-28 01:06:16.322249 | orchestrator | Saturday 28 March 2026 01:05:00 +0000 (0:00:01.665) 0:00:54.223 ******** 2026-03-28 01:06:16.322279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 01:06:16.322298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:06:16.322316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:06:16.322356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 01:06:16.322378 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:16.322396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:06:16.322415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:06:16.322432 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:16.322876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 01:06:16.322963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:06:16.323017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:06:16.323033 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:16.323047 | orchestrator | 2026-03-28 01:06:16.323059 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-28 01:06:16.323072 | orchestrator | Saturday 28 March 2026 01:05:01 +0000 (0:00:01.139) 0:00:55.363 ******** 2026-03-28 01:06:16.323084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:06:16.323096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:06:16.323126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:06:16.323138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.323164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.323176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.323188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.323199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.323220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.323231 | orchestrator | 2026-03-28 01:06:16.323243 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-28 01:06:16.323254 | orchestrator | Saturday 28 March 2026 01:05:06 +0000 (0:00:05.057) 0:01:00.421 ******** 2026-03-28 01:06:16.323272 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:06:16.323283 | orchestrator | changed: [testbed-node-1] 2026-03-28 
01:06:16.323294 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:06:16.323305 | orchestrator | 2026-03-28 01:06:16.323316 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-28 01:06:16.323327 | orchestrator | Saturday 28 March 2026 01:05:10 +0000 (0:00:03.892) 0:01:04.313 ******** 2026-03-28 01:06:16.323338 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 01:06:16.323349 | orchestrator | 2026-03-28 01:06:16.323362 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-28 01:06:16.323375 | orchestrator | Saturday 28 March 2026 01:05:12 +0000 (0:00:02.474) 0:01:06.788 ******** 2026-03-28 01:06:16.323388 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:16.323400 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:16.323413 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:16.323426 | orchestrator | 2026-03-28 01:06:16.323439 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-28 01:06:16.323451 | orchestrator | Saturday 28 March 2026 01:05:14 +0000 (0:00:01.107) 0:01:07.896 ******** 2026-03-28 01:06:16.323470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 
'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:06:16.323486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:06:16.323517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:06:16.323550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.323565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.323583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.323598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.323611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.323625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.323638 | orchestrator | 2026-03-28 01:06:16.323659 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-28 01:06:16.323672 | orchestrator | Saturday 28 March 2026 01:05:26 +0000 (0:00:12.318) 0:01:20.214 ******** 2026-03-28 01:06:16.323699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 01:06:16.323727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:06:16.323748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:06:16.323767 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:16.323787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 01:06:16.323865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 
'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:06:16.323913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:06:16.323934 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:16.323956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 01:06:16.323986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:06:16.324005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:06:16.324026 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:16.324044 | orchestrator | 2026-03-28 01:06:16.324064 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-28 01:06:16.324084 | orchestrator | Saturday 28 March 2026 01:05:28 +0000 (0:00:01.659) 0:01:21.873 ******** 2026-03-28 01:06:16.324104 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:06:16.324148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:06:16.324170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:06:16.324199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.324219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.324238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.324279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.324310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.324323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:16.324334 | orchestrator | 2026-03-28 01:06:16.324345 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-28 01:06:16.324357 | orchestrator | Saturday 28 March 2026 01:05:32 +0000 (0:00:04.568) 0:01:26.445 ******** 2026-03-28 01:06:16.324368 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:16.324379 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:16.324390 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:16.324401 | orchestrator | 2026-03-28 01:06:16.324418 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-28 01:06:16.324429 | orchestrator | Saturday 28 March 2026 01:05:33 +0000 (0:00:00.586) 0:01:27.031 ******** 2026-03-28 01:06:16.324441 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:06:16.324452 | orchestrator | 2026-03-28 01:06:16.324464 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-28 01:06:16.324475 | orchestrator | Saturday 28 
March 2026 01:05:35 +0000 (0:00:02.533) 0:01:29.565 ******** 2026-03-28 01:06:16.324486 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:06:16.324497 | orchestrator | 2026-03-28 01:06:16.324508 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-28 01:06:16.324519 | orchestrator | Saturday 28 March 2026 01:05:38 +0000 (0:00:02.665) 0:01:32.231 ******** 2026-03-28 01:06:16.324530 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:06:16.324541 | orchestrator | 2026-03-28 01:06:16.324552 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-28 01:06:16.324563 | orchestrator | Saturday 28 March 2026 01:05:51 +0000 (0:00:13.073) 0:01:45.304 ******** 2026-03-28 01:06:16.324574 | orchestrator | 2026-03-28 01:06:16.324585 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-28 01:06:16.324596 | orchestrator | Saturday 28 March 2026 01:05:51 +0000 (0:00:00.144) 0:01:45.449 ******** 2026-03-28 01:06:16.324614 | orchestrator | 2026-03-28 01:06:16.324625 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-28 01:06:16.324636 | orchestrator | Saturday 28 March 2026 01:05:51 +0000 (0:00:00.094) 0:01:45.543 ******** 2026-03-28 01:06:16.324647 | orchestrator | 2026-03-28 01:06:16.324658 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-28 01:06:16.324669 | orchestrator | Saturday 28 March 2026 01:05:51 +0000 (0:00:00.080) 0:01:45.624 ******** 2026-03-28 01:06:16.324681 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:06:16.324692 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:06:16.324702 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:06:16.324713 | orchestrator | 2026-03-28 01:06:16.324724 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener 
container] ****** 2026-03-28 01:06:16.324735 | orchestrator | Saturday 28 March 2026 01:06:00 +0000 (0:00:08.757) 0:01:54.384 ******** 2026-03-28 01:06:16.324746 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:06:16.324757 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:06:16.324771 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:06:16.324788 | orchestrator | 2026-03-28 01:06:16.324856 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-28 01:06:16.324873 | orchestrator | Saturday 28 March 2026 01:06:08 +0000 (0:00:08.434) 0:02:02.819 ******** 2026-03-28 01:06:16.324884 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:06:16.324895 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:06:16.324905 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:06:16.324916 | orchestrator | 2026-03-28 01:06:16.324927 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:06:16.324939 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 01:06:16.324957 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 01:06:16.324975 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 01:06:16.324994 | orchestrator | 2026-03-28 01:06:16.325013 | orchestrator | 2026-03-28 01:06:16.325034 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:06:16.325052 | orchestrator | Saturday 28 March 2026 01:06:14 +0000 (0:00:05.688) 0:02:08.508 ******** 2026-03-28 01:06:16.325082 | orchestrator | =============================================================================== 2026-03-28 01:06:16.325111 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.31s 
2026-03-28 01:06:16.325134 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.07s 2026-03-28 01:06:16.325145 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 12.32s 2026-03-28 01:06:16.325156 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.76s 2026-03-28 01:06:16.325167 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 8.43s 2026-03-28 01:06:16.325178 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.64s 2026-03-28 01:06:16.325189 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.69s 2026-03-28 01:06:16.325200 | orchestrator | barbican : Copying over config.json files for services ------------------ 5.06s 2026-03-28 01:06:16.325211 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.63s 2026-03-28 01:06:16.325222 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.57s 2026-03-28 01:06:16.325233 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.49s 2026-03-28 01:06:16.325243 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.89s 2026-03-28 01:06:16.325254 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.75s 2026-03-28 01:06:16.325277 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.49s 2026-03-28 01:06:16.325289 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.41s 2026-03-28 01:06:16.325300 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.67s 2026-03-28 01:06:16.325320 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.53s 2026-03-28 
01:06:16.325348 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.47s 2026-03-28 01:06:16.325369 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.28s 2026-03-28 01:06:16.325391 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.03s 2026-03-28 01:06:16.325440 | orchestrator | 2026-03-28 01:06:16 | INFO  | Task 48f6bac4-5fbc-408d-9df4-f20d0e1f85e8 is in state STARTED 2026-03-28 01:06:16.325462 | orchestrator | 2026-03-28 01:06:16 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:06:19.349711 | orchestrator | 2026-03-28 01:06:19 | INFO  | Task fc64854e-0cd6-4890-833b-da25dc0e6085 is in state STARTED 2026-03-28 01:06:19.350438 | orchestrator | 2026-03-28 01:06:19 | INFO  | Task ddbedda6-d42e-483c-8cf2-219f7331daaa is in state STARTED 2026-03-28 01:06:19.351257 | orchestrator | 2026-03-28 01:06:19 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:06:19.352080 | orchestrator | 2026-03-28 01:06:19 | INFO  | Task 48f6bac4-5fbc-408d-9df4-f20d0e1f85e8 is in state STARTED 2026-03-28 01:06:19.352115 | orchestrator | 2026-03-28 01:06:19 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:07:29.414228 | orchestrator | 2026-03-28 01:07:29 | INFO  | Task 
fc64854e-0cd6-4890-833b-da25dc0e6085 is in state SUCCESS 2026-03-28 01:07:29.415043 | orchestrator | 2026-03-28 01:07:29.415104 | orchestrator | 2026-03-28 01:07:29.415116 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:07:29.415125 | orchestrator | 2026-03-28 01:07:29.415133 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:07:29.415142 | orchestrator | Saturday 28 March 2026 01:04:06 +0000 (0:00:00.453) 0:00:00.453 ******** 2026-03-28 01:07:29.415149 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:07:29.415157 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:07:29.415165 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:07:29.415173 | orchestrator | 2026-03-28 01:07:29.415181 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:07:29.415188 | orchestrator | Saturday 28 March 2026 01:04:07 +0000 (0:00:00.604) 0:00:01.058 ******** 2026-03-28 01:07:29.415196 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-03-28 01:07:29.415205 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-03-28 01:07:29.415213 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-28 01:07:29.415221 | orchestrator | 2026-03-28 01:07:29.415229 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-03-28 01:07:29.415236 | orchestrator | 2026-03-28 01:07:29.415243 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-28 01:07:29.415252 | orchestrator | Saturday 28 March 2026 01:04:08 +0000 (0:00:00.944) 0:00:02.002 ******** 2026-03-28 01:07:29.415260 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:07:29.415270 | orchestrator | 2026-03-28 
01:07:29.415278 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-03-28 01:07:29.415286 | orchestrator | Saturday 28 March 2026 01:04:08 +0000 (0:00:00.853) 0:00:02.855 ******** 2026-03-28 01:07:29.415294 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-28 01:07:29.415302 | orchestrator | 2026-03-28 01:07:29.415310 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-28 01:07:29.415409 | orchestrator | Saturday 28 March 2026 01:04:12 +0000 (0:00:03.771) 0:00:06.626 ******** 2026-03-28 01:07:29.415414 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-28 01:07:29.415436 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-28 01:07:29.415441 | orchestrator | 2026-03-28 01:07:29.415446 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-28 01:07:29.415470 | orchestrator | Saturday 28 March 2026 01:04:19 +0000 (0:00:07.176) 0:00:13.803 ******** 2026-03-28 01:07:29.415475 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-28 01:07:29.415480 | orchestrator | 2026-03-28 01:07:29.415484 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-03-28 01:07:29.415489 | orchestrator | Saturday 28 March 2026 01:04:23 +0000 (0:00:03.570) 0:00:17.373 ******** 2026-03-28 01:07:29.415494 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 01:07:29.415499 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-28 01:07:29.415503 | orchestrator | 2026-03-28 01:07:29.415508 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-03-28 01:07:29.415513 | orchestrator | Saturday 28 March 2026 
01:04:28 +0000 (0:00:04.664) 0:00:22.037 ******** 2026-03-28 01:07:29.415518 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 01:07:29.415522 | orchestrator | 2026-03-28 01:07:29.415527 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-28 01:07:29.415532 | orchestrator | Saturday 28 March 2026 01:04:32 +0000 (0:00:04.193) 0:00:26.231 ******** 2026-03-28 01:07:29.415547 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-28 01:07:29.415552 | orchestrator | 2026-03-28 01:07:29.415557 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-28 01:07:29.415561 | orchestrator | Saturday 28 March 2026 01:04:36 +0000 (0:00:03.999) 0:00:30.230 ******** 2026-03-28 01:07:29.415569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:07:29.415590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:07:29.415597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:07:29.415609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415746 | orchestrator | 2026-03-28 01:07:29.415751 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-28 01:07:29.415757 | orchestrator | Saturday 28 March 2026 01:04:39 +0000 (0:00:02.758) 0:00:32.989 ******** 2026-03-28 01:07:29.415766 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:07:29.415772 | orchestrator | 2026-03-28 01:07:29.415777 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-28 01:07:29.415783 | 
orchestrator | Saturday 28 March 2026 01:04:39 +0000 (0:00:00.133) 0:00:33.122 ******** 2026-03-28 01:07:29.415788 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:07:29.415793 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:07:29.415798 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:07:29.415804 | orchestrator | 2026-03-28 01:07:29.415809 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-28 01:07:29.415814 | orchestrator | Saturday 28 March 2026 01:04:39 +0000 (0:00:00.303) 0:00:33.426 ******** 2026-03-28 01:07:29.415820 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:07:29.415826 | orchestrator | 2026-03-28 01:07:29.415831 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-28 01:07:29.415836 | orchestrator | Saturday 28 March 2026 01:04:40 +0000 (0:00:00.818) 0:00:34.245 ******** 2026-03-28 01:07:29.415845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:07:29.415852 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:07:29.415870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:07:29.415876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 
'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.415987 | orchestrator | 2026-03-28 01:07:29.415993 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-28 01:07:29.415998 | orchestrator | Saturday 28 March 2026 01:04:46 +0000 (0:00:06.175) 0:00:40.420 
******** 2026-03-28 01:07:29.416004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:07:29.416019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:07:29.416027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416062 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:07:29.416070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:07:29.416240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:07:29.416250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416273 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:07:29.416278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:07:29.416292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:07:29.416297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416321 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:07:29.416325 | orchestrator | 2026-03-28 01:07:29.416330 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-28 01:07:29.416342 | orchestrator | Saturday 28 March 2026 01:04:47 +0000 (0:00:01.326) 0:00:41.746 ******** 2026-03-28 01:07:29.416350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:07:29.416362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:07:29.416370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:07:29.416397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:07:29.416405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416425 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:07:29.416436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416444 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:07:29.416449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:07:29.416457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:07:29.416463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.416481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 
01:07:29.416489 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:07:29.416494 | orchestrator | 2026-03-28 01:07:29.416499 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-28 01:07:29.416504 | orchestrator | Saturday 28 March 2026 01:04:49 +0000 (0:00:01.930) 0:00:43.677 ******** 2026-03-28 01:07:29.416509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:07:29.416517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:07:29.416523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:07:29.416528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:07:29.416539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:07:29.416544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:07:29.416552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.416557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.416562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.416567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.416572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.416584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.416590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.416623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.416629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.416634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.416639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.416654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.416663 | orchestrator | 2026-03-28 01:07:29.416671 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-28 01:07:29.416678 | orchestrator | Saturday 28 March 2026 01:04:57 +0000 (0:00:07.535) 0:00:51.212 ******** 2026-03-28 01:07:29.416686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:07:29.417632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:07:29.417672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:07:29.417681 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:07:29.417709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:07:29.417718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:07:29.417744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.417761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.417769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': 
'30'}}}) 2026-03-28 01:07:29.417775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.417786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.417794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.417799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 
'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.417807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.417813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.417817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.417826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.417831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.417836 | orchestrator | 2026-03-28 01:07:29.417844 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-28 01:07:29.417849 | orchestrator | Saturday 28 March 2026 
01:05:24 +0000 (0:00:27.509) 0:01:18.722 ******** 2026-03-28 01:07:29.417854 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-28 01:07:29.417860 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-28 01:07:29.417864 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-28 01:07:29.417869 | orchestrator | 2026-03-28 01:07:29.417874 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-28 01:07:29.417878 | orchestrator | Saturday 28 March 2026 01:05:31 +0000 (0:00:06.543) 0:01:25.265 ******** 2026-03-28 01:07:29.417883 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-28 01:07:29.417888 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-28 01:07:29.417892 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-28 01:07:29.417897 | orchestrator | 2026-03-28 01:07:29.417902 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-28 01:07:29.417907 | orchestrator | Saturday 28 March 2026 01:05:36 +0000 (0:00:04.815) 0:01:30.081 ******** 2026-03-28 01:07:29.417916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:07:29.417921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:07:29.417929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:07:29.417937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:07:29.417942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.417947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.417955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.417966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:07:29.417971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.417976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.417984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:07:29.417989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.417998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
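Every loop item in the task output above carries the same `healthcheck` shape: string-valued `interval`, `retries`, `start_period`, and `timeout` fields plus a `test` command list such as `['CMD-SHELL', 'healthcheck_port designate-central 5672']`. As a minimal sketch of what a consumer of these dicts has to do (this is illustrative, not kolla-ansible's actual code; the helper name `to_docker_healthcheck` is hypothetical), the string seconds must be converted to Docker's nanosecond durations before they can be passed to the container runtime:

```python
# Illustrative sketch only: map the healthcheck dicts seen in the log items
# above into Docker API-style keyword arguments. Field names ('interval',
# 'retries', 'start_period', 'test', 'timeout') come straight from the log;
# the function name is a hypothetical stand-in, not kolla-ansible's API.

def to_docker_healthcheck(hc: dict) -> dict:
    """Convert a kolla-style healthcheck mapping to Docker-style options.

    The log shows durations as second-valued strings; Docker's API expects
    integer nanoseconds, so convert seconds -> nanoseconds here.
    """
    sec = 1_000_000_000  # nanoseconds per second
    return {
        "test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_port designate-central 5672']
        "interval": int(hc["interval"]) * sec,
        "timeout": int(hc["timeout"]) * sec,
        "retries": int(hc["retries"]),
        "start_period": int(hc["start_period"]) * sec,
    }


# One of the healthcheck dicts repeated throughout the task output above:
example = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port designate-central 5672"],
    "timeout": "30",
}
converted = to_docker_healthcheck(example)
print(converted["interval"])  # 30 s expressed in nanoseconds: 30000000000
```

Note the `test` list is passed through unchanged: `healthcheck_curl` and `healthcheck_port` are helper scripts shipped inside the kolla images, which is why the API containers probe an HTTP endpoint while central/mdns/producer/worker only check a listening port.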
2026-03-28 01:07:29.418069 | orchestrator | 2026-03-28 01:07:29.418074 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-28 01:07:29.418079 | orchestrator | Saturday 28 March 2026 01:05:39 +0000 (0:00:03.508) 0:01:33.589 ******** 2026-03-28 01:07:29.418090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:07:29.418099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:07:29.418104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:07:29.418111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418203 | orchestrator | 2026-03-28 01:07:29.418209 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-28 01:07:29.418217 | orchestrator | Saturday 28 March 2026 01:05:43 +0000 (0:00:03.734) 0:01:37.323 ******** 2026-03-28 01:07:29.418225 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:07:29.418233 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:07:29.418240 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:07:29.418248 | orchestrator | 2026-03-28 01:07:29.418255 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-28 01:07:29.418267 | orchestrator | Saturday 28 March 2026 01:05:44 +0000 (0:00:00.704) 0:01:38.027 ******** 2026-03-28 01:07:29.418281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:07:29.418290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:07:29.418299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 
01:07:29.418342 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:07:29.418355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:07:29.418364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:07:29.418372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418409 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:07:29.418422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:07:29.418431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 
01:07:29.418439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418491 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:07:29.418501 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:07:29.418507 | orchestrator | 2026-03-28 01:07:29.418512 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-28 01:07:29.418518 | orchestrator | Saturday 28 March 2026 01:05:45 +0000 (0:00:01.491) 0:01:39.519 ******** 2026-03-28 01:07:29.418528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:07:29.418534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:07:29.418540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:07:29.418568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:29.418674 | orchestrator | 2026-03-28 01:07:29.418681 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-28 01:07:29.418688 | orchestrator | Saturday 28 March 2026 01:05:51 +0000 (0:00:06.295) 0:01:45.816 ******** 2026-03-28 01:07:29.418696 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:07:29.418703 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:07:29.418711 | orchestrator | 
skipping: [testbed-node-2] 2026-03-28 01:07:29.418718 | orchestrator | 2026-03-28 01:07:29.418748 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-03-28 01:07:29.418754 | orchestrator | Saturday 28 March 2026 01:05:52 +0000 (0:00:00.894) 0:01:46.711 ******** 2026-03-28 01:07:29.418760 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-28 01:07:29.418764 | orchestrator | 2026-03-28 01:07:29.418769 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-03-28 01:07:29.418773 | orchestrator | Saturday 28 March 2026 01:05:55 +0000 (0:00:02.715) 0:01:49.426 ******** 2026-03-28 01:07:29.418778 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 01:07:29.418783 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-28 01:07:29.418788 | orchestrator | 2026-03-28 01:07:29.418792 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-28 01:07:29.418801 | orchestrator | Saturday 28 March 2026 01:05:58 +0000 (0:00:02.823) 0:01:52.250 ******** 2026-03-28 01:07:29.418806 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:07:29.418810 | orchestrator | 2026-03-28 01:07:29.418815 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-28 01:07:29.418820 | orchestrator | Saturday 28 March 2026 01:06:16 +0000 (0:00:17.978) 0:02:10.228 ******** 2026-03-28 01:07:29.418825 | orchestrator | 2026-03-28 01:07:29.418829 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-28 01:07:29.418834 | orchestrator | Saturday 28 March 2026 01:06:16 +0000 (0:00:00.078) 0:02:10.307 ******** 2026-03-28 01:07:29.418839 | orchestrator | 2026-03-28 01:07:29.418843 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-28 
01:07:29.418848 | orchestrator | Saturday 28 March 2026 01:06:16 +0000 (0:00:00.083) 0:02:10.390 ******** 2026-03-28 01:07:29.418853 | orchestrator | 2026-03-28 01:07:29.418857 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-28 01:07:29.418862 | orchestrator | Saturday 28 March 2026 01:06:16 +0000 (0:00:00.082) 0:02:10.473 ******** 2026-03-28 01:07:29.418867 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:07:29.418871 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:07:29.418876 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:07:29.418881 | orchestrator | 2026-03-28 01:07:29.418885 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-28 01:07:29.418890 | orchestrator | Saturday 28 March 2026 01:06:27 +0000 (0:00:10.771) 0:02:21.245 ******** 2026-03-28 01:07:29.418895 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:07:29.418899 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:07:29.418904 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:07:29.418913 | orchestrator | 2026-03-28 01:07:29.418918 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-28 01:07:29.418922 | orchestrator | Saturday 28 March 2026 01:06:40 +0000 (0:00:13.164) 0:02:34.409 ******** 2026-03-28 01:07:29.418927 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:07:29.418932 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:07:29.418936 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:07:29.418941 | orchestrator | 2026-03-28 01:07:29.418945 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-28 01:07:29.418950 | orchestrator | Saturday 28 March 2026 01:06:55 +0000 (0:00:15.068) 0:02:49.478 ******** 2026-03-28 01:07:29.418955 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:07:29.418959 | orchestrator | 
changed: [testbed-node-2] 2026-03-28 01:07:29.418964 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:07:29.418968 | orchestrator | 2026-03-28 01:07:29.418973 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-28 01:07:29.418978 | orchestrator | Saturday 28 March 2026 01:07:06 +0000 (0:00:10.866) 0:03:00.344 ******** 2026-03-28 01:07:29.418982 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:07:29.418987 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:07:29.418992 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:07:29.418996 | orchestrator | 2026-03-28 01:07:29.419001 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-28 01:07:29.419006 | orchestrator | Saturday 28 March 2026 01:07:12 +0000 (0:00:06.226) 0:03:06.571 ******** 2026-03-28 01:07:29.419010 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:07:29.419015 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:07:29.419020 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:07:29.419024 | orchestrator | 2026-03-28 01:07:29.419029 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-28 01:07:29.419033 | orchestrator | Saturday 28 March 2026 01:07:19 +0000 (0:00:07.197) 0:03:13.768 ******** 2026-03-28 01:07:29.419038 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:07:29.419042 | orchestrator | 2026-03-28 01:07:29.419047 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:07:29.419056 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 01:07:29.419062 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 01:07:29.419067 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  
rescued=0 ignored=0 2026-03-28 01:07:29.419072 | orchestrator | 2026-03-28 01:07:29.419076 | orchestrator | 2026-03-28 01:07:29.419081 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:07:29.419086 | orchestrator | Saturday 28 March 2026 01:07:26 +0000 (0:00:06.768) 0:03:20.537 ******** 2026-03-28 01:07:29.419090 | orchestrator | =============================================================================== 2026-03-28 01:07:29.419095 | orchestrator | designate : Copying over designate.conf -------------------------------- 27.51s 2026-03-28 01:07:29.419100 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.98s 2026-03-28 01:07:29.419104 | orchestrator | designate : Restart designate-central container ------------------------ 15.07s 2026-03-28 01:07:29.419109 | orchestrator | designate : Restart designate-api container ---------------------------- 13.17s 2026-03-28 01:07:29.419114 | orchestrator | designate : Restart designate-producer container ----------------------- 10.87s 2026-03-28 01:07:29.419118 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 10.77s 2026-03-28 01:07:29.419123 | orchestrator | designate : Copying over config.json files for services ----------------- 7.54s 2026-03-28 01:07:29.419127 | orchestrator | designate : Restart designate-worker container -------------------------- 7.20s 2026-03-28 01:07:29.419136 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.18s 2026-03-28 01:07:29.419140 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.77s 2026-03-28 01:07:29.419210 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.54s 2026-03-28 01:07:29.419216 | orchestrator | designate : Check designate containers ---------------------------------- 6.30s 2026-03-28 01:07:29.419220 | 
orchestrator | designate : Restart designate-mdns container ---------------------------- 6.23s 2026-03-28 01:07:29.419225 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.18s 2026-03-28 01:07:29.419230 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.82s 2026-03-28 01:07:29.419234 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.66s 2026-03-28 01:07:29.419239 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 4.19s 2026-03-28 01:07:29.419244 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.00s 2026-03-28 01:07:29.419249 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.77s 2026-03-28 01:07:29.419253 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.73s 2026-03-28 01:07:29.419258 | orchestrator | 2026-03-28 01:07:29 | INFO  | Task ddbedda6-d42e-483c-8cf2-219f7331daaa is in state STARTED 2026-03-28 01:07:29.421816 | orchestrator | 2026-03-28 01:07:29 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:07:29.423952 | orchestrator | 2026-03-28 01:07:29 | INFO  | Task 597f75eb-5f18-4546-ba43-bf56c86788ec is in state STARTED 2026-03-28 01:07:29.425198 | orchestrator | 2026-03-28 01:07:29 | INFO  | Task 48f6bac4-5fbc-408d-9df4-f20d0e1f85e8 is in state SUCCESS 2026-03-28 01:07:29.425982 | orchestrator | 2026-03-28 01:07:29.426007 | orchestrator | 2026-03-28 01:07:29.426012 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:07:29.426040 | orchestrator | 2026-03-28 01:07:29.426045 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:07:29.426050 | orchestrator | Saturday 28 March 2026 01:06:02 +0000 (0:00:00.435) 0:00:00.435 
******** 2026-03-28 01:07:29.426054 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:07:29.426060 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:07:29.426064 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:07:29.426067 | orchestrator | 2026-03-28 01:07:29.426071 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:07:29.426076 | orchestrator | Saturday 28 March 2026 01:06:02 +0000 (0:00:00.549) 0:00:00.984 ******** 2026-03-28 01:07:29.426081 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-03-28 01:07:29.426085 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-03-28 01:07:29.426089 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-03-28 01:07:29.426093 | orchestrator | 2026-03-28 01:07:29.426097 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-03-28 01:07:29.426101 | orchestrator | 2026-03-28 01:07:29.426104 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-28 01:07:29.426108 | orchestrator | Saturday 28 March 2026 01:06:03 +0000 (0:00:01.139) 0:00:02.124 ******** 2026-03-28 01:07:29.426112 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:07:29.426146 | orchestrator | 2026-03-28 01:07:29.426151 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-03-28 01:07:29.426154 | orchestrator | Saturday 28 March 2026 01:06:04 +0000 (0:00:00.848) 0:00:02.972 ******** 2026-03-28 01:07:29.426158 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-28 01:07:29.426162 | orchestrator | 2026-03-28 01:07:29.426166 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-28 01:07:29.426194 | orchestrator | Saturday 28 
March 2026 01:06:08 +0000 (0:00:04.097) 0:00:07.069 ******** 2026-03-28 01:07:29.426199 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-28 01:07:29.426203 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-28 01:07:29.426207 | orchestrator | 2026-03-28 01:07:29.426211 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-28 01:07:29.426215 | orchestrator | Saturday 28 March 2026 01:06:15 +0000 (0:00:06.383) 0:00:13.452 ******** 2026-03-28 01:07:29.426219 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 01:07:29.426223 | orchestrator | 2026-03-28 01:07:29.426227 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-28 01:07:29.426230 | orchestrator | Saturday 28 March 2026 01:06:18 +0000 (0:00:03.591) 0:00:17.044 ******** 2026-03-28 01:07:29.426234 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 01:07:29.426238 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-03-28 01:07:29.426242 | orchestrator | 2026-03-28 01:07:29.426246 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-28 01:07:29.426250 | orchestrator | Saturday 28 March 2026 01:06:23 +0000 (0:00:04.570) 0:00:21.615 ******** 2026-03-28 01:07:29.426254 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 01:07:29.426258 | orchestrator | 2026-03-28 01:07:29.426261 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-03-28 01:07:29.426265 | orchestrator | Saturday 28 March 2026 01:06:27 +0000 (0:00:03.756) 0:00:25.371 ******** 2026-03-28 01:07:29.426269 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-28 01:07:29.426273 | 
orchestrator | 2026-03-28 01:07:29.426277 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-28 01:07:29.426281 | orchestrator | Saturday 28 March 2026 01:06:31 +0000 (0:00:04.499) 0:00:29.871 ******** 2026-03-28 01:07:29.426285 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:07:29.426288 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:07:29.426292 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:07:29.426296 | orchestrator | 2026-03-28 01:07:29.426300 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-28 01:07:29.426304 | orchestrator | Saturday 28 March 2026 01:06:32 +0000 (0:00:00.610) 0:00:30.482 ******** 2026-03-28 01:07:29.426311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:29.426327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:29.426340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:29.426344 | orchestrator | 2026-03-28 01:07:29.426348 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-28 01:07:29.426352 | orchestrator | Saturday 28 March 2026 01:06:33 +0000 (0:00:01.092) 0:00:31.574 ******** 2026-03-28 01:07:29.426356 | orchestrator | skipping: [testbed-node-0] 2026-03-28 
01:07:29.426359 | orchestrator | 2026-03-28 01:07:29.426363 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-28 01:07:29.426367 | orchestrator | Saturday 28 March 2026 01:06:33 +0000 (0:00:00.257) 0:00:31.831 ******** 2026-03-28 01:07:29.426371 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:07:29.426375 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:07:29.426378 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:07:29.426382 | orchestrator | 2026-03-28 01:07:29.426386 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-28 01:07:29.426390 | orchestrator | Saturday 28 March 2026 01:06:34 +0000 (0:00:01.138) 0:00:32.970 ******** 2026-03-28 01:07:29.426394 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:07:29.426398 | orchestrator | 2026-03-28 01:07:29.426401 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-28 01:07:29.426405 | orchestrator | Saturday 28 March 2026 01:06:35 +0000 (0:00:00.840) 0:00:33.810 ******** 2026-03-28 01:07:29.426409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:29.426418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:29.426431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:29.426435 | orchestrator | 2026-03-28 01:07:29.426439 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-28 01:07:29.426446 | orchestrator | Saturday 28 March 2026 01:06:37 +0000 (0:00:01.890) 0:00:35.701 ******** 2026-03-28 01:07:29.426450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 01:07:29.426454 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:07:29.426458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 01:07:29.426462 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:07:29.426469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 01:07:29.426477 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:07:29.426481 | orchestrator | 2026-03-28 01:07:29.426485 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-28 01:07:29.426488 | orchestrator | Saturday 28 March 2026 01:06:38 +0000 (0:00:01.203) 0:00:36.904 ******** 2026-03-28 01:07:29.426492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 01:07:29.426496 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:07:29.426503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 01:07:29.426507 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:07:29.426511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 01:07:29.426515 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:07:29.426519 | orchestrator | 2026-03-28 01:07:29.426523 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-28 01:07:29.426526 | orchestrator | Saturday 28 March 2026 01:06:39 +0000 (0:00:01.296) 0:00:38.200 ******** 2026-03-28 01:07:29.426538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:29.426542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:29.426549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}}}}) 2026-03-28 01:07:29.426553 | orchestrator | 2026-03-28 01:07:29.426557 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-28 01:07:29.426561 | orchestrator | Saturday 28 March 2026 01:06:42 +0000 (0:00:02.863) 0:00:41.064 ******** 2026-03-28 01:07:29.426565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:29.426572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:29.426580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:29.426584 | orchestrator | 2026-03-28 01:07:29.426588 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-28 01:07:29.426593 | orchestrator | Saturday 28 March 2026 01:06:47 +0000 (0:00:05.155) 0:00:46.219 ******** 2026-03-28 01:07:29.426597 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-28 01:07:29.426602 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-28 01:07:29.426606 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-28 01:07:29.426611 | orchestrator | 2026-03-28 01:07:29.426617 | orchestrator | TASK [placement : Copying over 
migrate-db.rc.j2 configuration] ***************** 2026-03-28 01:07:29.426622 | orchestrator | Saturday 28 March 2026 01:06:50 +0000 (0:00:02.153) 0:00:48.372 ******** 2026-03-28 01:07:29.426626 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:07:29.426631 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:07:29.426635 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:07:29.426639 | orchestrator | 2026-03-28 01:07:29.426644 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-28 01:07:29.426648 | orchestrator | Saturday 28 March 2026 01:06:52 +0000 (0:00:02.026) 0:00:50.399 ******** 2026-03-28 01:07:29.426653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 01:07:29.426661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 01:07:29.426666 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:07:29.426670 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:07:29.426679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 01:07:29.426684 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:07:29.426688 | orchestrator | 2026-03-28 01:07:29.426692 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-28 01:07:29.426697 | orchestrator | Saturday 28 March 2026 01:06:52 +0000 
(0:00:00.759) 0:00:51.158 ******** 2026-03-28 01:07:29.426713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:29.426751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:29.426760 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:29.426765 | orchestrator | 2026-03-28 01:07:29.426769 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-28 01:07:29.426774 | orchestrator | Saturday 28 March 2026 01:06:54 +0000 (0:00:01.991) 0:00:53.150 ******** 2026-03-28 01:07:29.426778 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:07:29.426783 | orchestrator | 2026-03-28 01:07:29.426787 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-28 01:07:29.426791 | orchestrator | Saturday 28 March 2026 01:06:59 +0000 (0:00:04.095) 0:00:57.245 ******** 2026-03-28 01:07:29.426796 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:07:29.426800 | orchestrator | 2026-03-28 01:07:29.426805 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-28 01:07:29.426809 | orchestrator | Saturday 28 March 2026 01:07:02 +0000 (0:00:02.990) 0:01:00.237 ******** 2026-03-28 01:07:29.426816 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:07:29.426821 | 
orchestrator |
2026-03-28 01:07:29.426825 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-03-28 01:07:29.426830 | orchestrator | Saturday 28 March 2026 01:07:17 +0000 (0:00:15.952) 0:01:16.189 ********
2026-03-28 01:07:29.426834 | orchestrator |
2026-03-28 01:07:29.426838 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-03-28 01:07:29.426842 | orchestrator | Saturday 28 March 2026 01:07:18 +0000 (0:00:00.067) 0:01:16.257 ********
2026-03-28 01:07:29.426874 | orchestrator |
2026-03-28 01:07:29.426879 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-03-28 01:07:29.426883 | orchestrator | Saturday 28 March 2026 01:07:18 +0000 (0:00:00.075) 0:01:16.333 ********
2026-03-28 01:07:29.426888 | orchestrator |
2026-03-28 01:07:29.426892 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-03-28 01:07:29.426896 | orchestrator | Saturday 28 March 2026 01:07:18 +0000 (0:00:00.072) 0:01:16.405 ********
2026-03-28 01:07:29.426901 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:07:29.426905 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:07:29.426910 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:07:29.426914 | orchestrator |
2026-03-28 01:07:29.426919 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:07:29.426924 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-28 01:07:29.426929 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-28 01:07:29.426934 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-28 01:07:29.426939 | orchestrator |
2026-03-28 01:07:29.426948 | orchestrator |
2026-03-28 01:07:29.426955 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:07:29.426959 | orchestrator | Saturday 28 March 2026 01:07:28 +0000 (0:00:10.476) 0:01:26.881 ********
2026-03-28 01:07:29.426963 | orchestrator | ===============================================================================
2026-03-28 01:07:29.426967 | orchestrator | placement : Running placement bootstrap container ---------------------- 15.95s
2026-03-28 01:07:29.426971 | orchestrator | placement : Restart placement-api container ---------------------------- 10.48s
2026-03-28 01:07:29.426975 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.38s
2026-03-28 01:07:29.426978 | orchestrator | placement : Copying over placement.conf --------------------------------- 5.16s
2026-03-28 01:07:29.426982 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.57s
2026-03-28 01:07:29.426986 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.50s
2026-03-28 01:07:29.426990 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.10s
2026-03-28 01:07:29.426994 | orchestrator | placement : Creating placement databases -------------------------------- 4.09s
2026-03-28 01:07:29.426997 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.76s
2026-03-28 01:07:29.427001 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.59s
2026-03-28 01:07:29.427005 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.99s
2026-03-28 01:07:29.427009 | orchestrator | placement : Copying over config.json files for services ----------------- 2.86s
2026-03-28 01:07:29.427012 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.15s
2026-03-28 01:07:29.427016 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.03s
2026-03-28 01:07:29.427020 | orchestrator | placement : Check placement containers ---------------------------------- 1.99s
2026-03-28 01:07:29.427024 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.89s
2026-03-28 01:07:29.427027 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.30s
2026-03-28 01:07:29.427031 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.20s
2026-03-28 01:07:29.427035 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.14s
2026-03-28 01:07:29.427039 | orchestrator | placement : Set placement policy file ----------------------------------- 1.14s
2026-03-28 01:07:29.427042 | orchestrator | 2026-03-28 01:07:29 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:07:32.475848 | orchestrator | 2026-03-28 01:07:32 | INFO  | Task ddbedda6-d42e-483c-8cf2-219f7331daaa is in state STARTED
2026-03-28 01:07:32.476290 | orchestrator | 2026-03-28 01:07:32 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED
2026-03-28 01:07:32.476963 | orchestrator | 2026-03-28 01:07:32 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED
2026-03-28 01:07:32.477973 | orchestrator | 2026-03-28 01:07:32 | INFO  | Task 597f75eb-5f18-4546-ba43-bf56c86788ec is in state STARTED
2026-03-28 01:07:32.478718 | orchestrator | 2026-03-28 01:07:32 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:07:35.517701 | orchestrator | 2026-03-28 01:07:35 | INFO  | Task ddbedda6-d42e-483c-8cf2-219f7331daaa is in state STARTED
2026-03-28 01:07:35.518524 | orchestrator | 2026-03-28 01:07:35 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED
2026-03-28 01:07:35.519632 | orchestrator | 2026-03-28 01:07:35 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED
2026-03-28 01:07:35.521812 | orchestrator | 2026-03-28 01:07:35 | INFO  | Task 59ed3a9e-c524-47b7-9f16-21c5da18d77a is in state STARTED
2026-03-28 01:07:35.522864 | orchestrator | 2026-03-28 01:07:35 | INFO  | Task 597f75eb-5f18-4546-ba43-bf56c86788ec is in state SUCCESS
2026-03-28 01:07:35.523020 | orchestrator | 2026-03-28 01:07:35 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:07:38.581811 | orchestrator | 2026-03-28 01:07:38 | INFO  | Task ddbedda6-d42e-483c-8cf2-219f7331daaa is in state STARTED
2026-03-28 01:07:38.581883 | orchestrator | 2026-03-28 01:07:38 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED
2026-03-28 01:07:38.581890 | orchestrator | 2026-03-28 01:07:38 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED
2026-03-28 01:07:38.581896 | orchestrator | 2026-03-28 01:07:38 | INFO  | Task 59ed3a9e-c524-47b7-9f16-21c5da18d77a is in state STARTED
2026-03-28 01:07:38.581902 | orchestrator | 2026-03-28 01:07:38 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:07:41.628251 | orchestrator | 2026-03-28 01:07:41 | INFO  | Task ddbedda6-d42e-483c-8cf2-219f7331daaa is in state STARTED
2026-03-28 01:07:41.630076 | orchestrator | 2026-03-28 01:07:41 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED
2026-03-28 01:07:41.631811 | orchestrator | 2026-03-28 01:07:41 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED
2026-03-28 01:07:41.633531 | orchestrator | 2026-03-28 01:07:41 | INFO  | Task 59ed3a9e-c524-47b7-9f16-21c5da18d77a is in state STARTED
2026-03-28 01:07:41.633613 | orchestrator | 2026-03-28 01:07:41 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:07:44.679135 | orchestrator | 2026-03-28 01:07:44 | INFO  | Task ddbedda6-d42e-483c-8cf2-219f7331daaa is in state STARTED
2026-03-28 01:07:44.680461 | orchestrator | 2026-03-28 01:07:44 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED
2026-03-28 01:07:44.682592 | orchestrator | 2026-03-28 01:07:44 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED
2026-03-28 01:07:44.684693 | orchestrator | 2026-03-28 01:07:44 | INFO  | Task 59ed3a9e-c524-47b7-9f16-21c5da18d77a is in state STARTED
2026-03-28 01:07:44.684773 | orchestrator | 2026-03-28 01:07:44 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:07:47.736316 | orchestrator | 2026-03-28 01:07:47 | INFO  | Task ddbedda6-d42e-483c-8cf2-219f7331daaa is in state STARTED
2026-03-28 01:07:47.738753 | orchestrator | 2026-03-28 01:07:47 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED
2026-03-28 01:07:47.739856 | orchestrator | 2026-03-28 01:07:47 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED
2026-03-28 01:07:47.741927 | orchestrator | 2026-03-28 01:07:47 | INFO  | Task 59ed3a9e-c524-47b7-9f16-21c5da18d77a is in state STARTED
2026-03-28 01:07:47.742007 | orchestrator | 2026-03-28 01:07:47 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:07:50.804594 | orchestrator | 2026-03-28 01:07:50 | INFO  | Task ddbedda6-d42e-483c-8cf2-219f7331daaa is in state STARTED
2026-03-28 01:07:50.805608 | orchestrator | 2026-03-28 01:07:50 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED
2026-03-28 01:07:50.805983 | orchestrator | 2026-03-28 01:07:50 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED
2026-03-28 01:07:50.807012 | orchestrator | 2026-03-28 01:07:50 | INFO  | Task 59ed3a9e-c524-47b7-9f16-21c5da18d77a is in state STARTED
2026-03-28 01:07:50.807194 | orchestrator | 2026-03-28 01:07:50 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:07:53.849009 | orchestrator | 2026-03-28 01:07:53 | INFO  | Task ddbedda6-d42e-483c-8cf2-219f7331daaa is in state STARTED
2026-03-28 01:07:53.849940 | orchestrator | 2026-03-28 01:07:53 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED
2026-03-28 01:07:53.851668 | orchestrator | 2026-03-28 01:07:53 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED
2026-03-28 01:07:53.855211 | orchestrator | 2026-03-28 01:07:53 | INFO  | Task 59ed3a9e-c524-47b7-9f16-21c5da18d77a is in state STARTED
2026-03-28 01:07:53.855275 | orchestrator | 2026-03-28 01:07:53 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:07:56.917238 | orchestrator | 2026-03-28 01:07:56 | INFO  | Task ddbedda6-d42e-483c-8cf2-219f7331daaa is in state STARTED
2026-03-28 01:07:56.917325 | orchestrator | 2026-03-28 01:07:56 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED
2026-03-28 01:07:56.917337 | orchestrator | 2026-03-28 01:07:56 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED
2026-03-28 01:07:56.917345 | orchestrator | 2026-03-28 01:07:56 | INFO  | Task 59ed3a9e-c524-47b7-9f16-21c5da18d77a is in state STARTED
2026-03-28 01:07:56.917354 | orchestrator | 2026-03-28 01:07:56 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:07:59.954177 | orchestrator | 2026-03-28 01:07:59 | INFO  | Task ddbedda6-d42e-483c-8cf2-219f7331daaa is in state STARTED
2026-03-28 01:07:59.954897 | orchestrator | 2026-03-28 01:07:59 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED
2026-03-28 01:07:59.957097 | orchestrator | 2026-03-28 01:07:59 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED
2026-03-28 01:07:59.958266 | orchestrator | 2026-03-28 01:07:59 | INFO  | Task 59ed3a9e-c524-47b7-9f16-21c5da18d77a is in state STARTED
2026-03-28 01:07:59.958363 | orchestrator | 2026-03-28 01:07:59 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:08:03.001836 | orchestrator | 2026-03-28 01:08:03 | INFO  | Task ddbedda6-d42e-483c-8cf2-219f7331daaa is in state STARTED
2026-03-28 01:08:03.003549 | orchestrator | 2026-03-28 01:08:03 | INFO  | Task
becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:08:03.004381 | orchestrator | 2026-03-28 01:08:03 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:08:03.005816 | orchestrator | 2026-03-28 01:08:03 | INFO  | Task 59ed3a9e-c524-47b7-9f16-21c5da18d77a is in state STARTED 2026-03-28 01:08:03.006183 | orchestrator | 2026-03-28 01:08:03 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:08:06.050910 | orchestrator | 2026-03-28 01:08:06 | INFO  | Task ddbedda6-d42e-483c-8cf2-219f7331daaa is in state STARTED 2026-03-28 01:08:06.053617 | orchestrator | 2026-03-28 01:08:06 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:08:06.054315 | orchestrator | 2026-03-28 01:08:06 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:08:06.056550 | orchestrator | 2026-03-28 01:08:06 | INFO  | Task 59ed3a9e-c524-47b7-9f16-21c5da18d77a is in state STARTED 2026-03-28 01:08:06.056584 | orchestrator | 2026-03-28 01:08:06 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:08:09.178519 | orchestrator | 2026-03-28 01:08:09 | INFO  | Task ddbedda6-d42e-483c-8cf2-219f7331daaa is in state STARTED 2026-03-28 01:08:09.178776 | orchestrator | 2026-03-28 01:08:09 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:08:09.179309 | orchestrator | 2026-03-28 01:08:09 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:08:09.180026 | orchestrator | 2026-03-28 01:08:09 | INFO  | Task 59ed3a9e-c524-47b7-9f16-21c5da18d77a is in state STARTED 2026-03-28 01:08:09.180066 | orchestrator | 2026-03-28 01:08:09 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:08:12.209088 | orchestrator | 2026-03-28 01:08:12 | INFO  | Task ddbedda6-d42e-483c-8cf2-219f7331daaa is in state STARTED 2026-03-28 01:08:12.209260 | orchestrator | 2026-03-28 01:08:12 | INFO  | Task 
becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:08:12.211139 | orchestrator | 2026-03-28 01:08:12 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:08:12.211219 | orchestrator | 2026-03-28 01:08:12 | INFO  | Task 59ed3a9e-c524-47b7-9f16-21c5da18d77a is in state STARTED 2026-03-28 01:08:12.211230 | orchestrator | 2026-03-28 01:08:12 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:08:15.252190 | orchestrator | 2026-03-28 01:08:15 | INFO  | Task ddbedda6-d42e-483c-8cf2-219f7331daaa is in state STARTED 2026-03-28 01:08:15.253124 | orchestrator | 2026-03-28 01:08:15 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:08:15.260660 | orchestrator | 2026-03-28 01:08:15 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:08:15.260734 | orchestrator | 2026-03-28 01:08:15 | INFO  | Task 59ed3a9e-c524-47b7-9f16-21c5da18d77a is in state STARTED 2026-03-28 01:08:15.260750 | orchestrator | 2026-03-28 01:08:15 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:08:18.286510 | orchestrator | 2026-03-28 01:08:18 | INFO  | Task ddbedda6-d42e-483c-8cf2-219f7331daaa is in state STARTED 2026-03-28 01:08:18.286736 | orchestrator | 2026-03-28 01:08:18 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:08:18.287430 | orchestrator | 2026-03-28 01:08:18 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:08:18.287940 | orchestrator | 2026-03-28 01:08:18 | INFO  | Task 59ed3a9e-c524-47b7-9f16-21c5da18d77a is in state STARTED 2026-03-28 01:08:18.288693 | orchestrator | 2026-03-28 01:08:18 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:08:21.317179 | orchestrator | 2026-03-28 01:08:21 | INFO  | Task ddbedda6-d42e-483c-8cf2-219f7331daaa is in state STARTED 2026-03-28 01:08:21.317696 | orchestrator | 2026-03-28 01:08:21 | INFO  | Task 
becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:08:21.318820 | orchestrator | 2026-03-28 01:08:21 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:08:21.319334 | orchestrator | 2026-03-28 01:08:21 | INFO  | Task 59ed3a9e-c524-47b7-9f16-21c5da18d77a is in state SUCCESS 2026-03-28 01:08:21.320615 | orchestrator | 2026-03-28 01:08:21 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:08:21.320748 | orchestrator | 2026-03-28 01:08:21 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:08:24.367784 | orchestrator | 2026-03-28 01:08:24 | INFO  | Task ddbedda6-d42e-483c-8cf2-219f7331daaa is in state STARTED 2026-03-28 01:08:24.368340 | orchestrator | 2026-03-28 01:08:24 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:08:24.373550 | orchestrator | 2026-03-28 01:08:24 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:08:24.373770 | orchestrator | 2026-03-28 01:08:24 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:08:24.374061 | orchestrator | 2026-03-28 01:08:24 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:08:27.414207 | orchestrator | 2026-03-28 01:08:27 | INFO  | Task ddbedda6-d42e-483c-8cf2-219f7331daaa is in state SUCCESS 2026-03-28 01:08:27.415562 | orchestrator | 2026-03-28 01:08:27.415620 | orchestrator | 2026-03-28 01:08:27.415633 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:08:27.415646 | orchestrator | 2026-03-28 01:08:27.415695 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:08:27.415752 | orchestrator | Saturday 28 March 2026 01:07:31 +0000 (0:00:00.194) 0:00:00.194 ******** 2026-03-28 01:08:27.415772 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:08:27.415790 | orchestrator | ok: [testbed-node-1] 
2026-03-28 01:08:27.415921 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:08:27.415941 | orchestrator | 2026-03-28 01:08:27.415952 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:08:27.415971 | orchestrator | Saturday 28 March 2026 01:07:31 +0000 (0:00:00.335) 0:00:00.530 ******** 2026-03-28 01:08:27.415990 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-28 01:08:27.416010 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-28 01:08:27.416029 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-28 01:08:27.416046 | orchestrator | 2026-03-28 01:08:27.416065 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-03-28 01:08:27.416084 | orchestrator | 2026-03-28 01:08:27.416104 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-03-28 01:08:27.416123 | orchestrator | Saturday 28 March 2026 01:07:32 +0000 (0:00:00.817) 0:00:01.348 ******** 2026-03-28 01:08:27.416143 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:08:27.416162 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:08:27.416177 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:08:27.416188 | orchestrator | 2026-03-28 01:08:27.416199 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:08:27.416212 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:08:27.416225 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:08:27.416237 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:08:27.416248 | orchestrator | 2026-03-28 01:08:27.416258 | orchestrator | 2026-03-28 01:08:27.416269 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-28 01:08:27.416280 | orchestrator | Saturday 28 March 2026 01:07:33 +0000 (0:00:00.780) 0:00:02.128 ******** 2026-03-28 01:08:27.416291 | orchestrator | =============================================================================== 2026-03-28 01:08:27.416302 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s 2026-03-28 01:08:27.416313 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.78s 2026-03-28 01:08:27.416323 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2026-03-28 01:08:27.416334 | orchestrator | 2026-03-28 01:08:27.416345 | orchestrator | 2026-03-28 01:08:27.416356 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:08:27.416367 | orchestrator | 2026-03-28 01:08:27.416377 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:08:27.416388 | orchestrator | Saturday 28 March 2026 01:07:38 +0000 (0:00:00.342) 0:00:00.342 ******** 2026-03-28 01:08:27.416399 | orchestrator | ok: [testbed-manager] 2026-03-28 01:08:27.416410 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:08:27.416420 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:08:27.416431 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:08:27.416442 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:08:27.416453 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:08:27.416463 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:08:27.416474 | orchestrator | 2026-03-28 01:08:27.416485 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:08:27.416496 | orchestrator | Saturday 28 March 2026 01:07:39 +0000 (0:00:00.971) 0:00:01.313 ******** 2026-03-28 01:08:27.416507 | orchestrator | ok: [testbed-manager] => 
(item=enable_ceph_rgw_True) 2026-03-28 01:08:27.416518 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-03-28 01:08:27.416532 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-03-28 01:08:27.416566 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-03-28 01:08:27.416632 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-03-28 01:08:27.416654 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-03-28 01:08:27.416713 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-03-28 01:08:27.416733 | orchestrator | 2026-03-28 01:08:27.416752 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-28 01:08:27.416764 | orchestrator | 2026-03-28 01:08:27.416775 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-03-28 01:08:27.416786 | orchestrator | Saturday 28 March 2026 01:07:40 +0000 (0:00:00.772) 0:00:02.085 ******** 2026-03-28 01:08:27.416798 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:08:27.416810 | orchestrator | 2026-03-28 01:08:27.416821 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-03-28 01:08:27.416832 | orchestrator | Saturday 28 March 2026 01:07:42 +0000 (0:00:01.642) 0:00:03.728 ******** 2026-03-28 01:08:27.416843 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-03-28 01:08:27.416854 | orchestrator | 2026-03-28 01:08:27.416878 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-03-28 01:08:27.416890 | orchestrator | Saturday 28 March 2026 01:07:46 +0000 (0:00:04.279) 0:00:08.007 ******** 2026-03-28 01:08:27.416901 | orchestrator | changed: 
[testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-03-28 01:08:27.416930 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-03-28 01:08:27.416950 | orchestrator | 2026-03-28 01:08:27.416969 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-03-28 01:08:27.416988 | orchestrator | Saturday 28 March 2026 01:07:53 +0000 (0:00:07.231) 0:00:15.239 ******** 2026-03-28 01:08:27.417006 | orchestrator | ok: [testbed-manager] => (item=service) 2026-03-28 01:08:27.417023 | orchestrator | 2026-03-28 01:08:27.417039 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-03-28 01:08:27.417058 | orchestrator | Saturday 28 March 2026 01:07:57 +0000 (0:00:03.865) 0:00:19.105 ******** 2026-03-28 01:08:27.417076 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 01:08:27.417095 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-03-28 01:08:27.417114 | orchestrator | 2026-03-28 01:08:27.417132 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-03-28 01:08:27.417151 | orchestrator | Saturday 28 March 2026 01:08:03 +0000 (0:00:05.444) 0:00:24.549 ******** 2026-03-28 01:08:27.417164 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-03-28 01:08:27.417175 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-03-28 01:08:27.417186 | orchestrator | 2026-03-28 01:08:27.417197 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-03-28 01:08:27.417208 | orchestrator | Saturday 28 March 2026 01:08:09 +0000 (0:00:06.243) 0:00:30.793 ******** 2026-03-28 01:08:27.417219 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 
2026-03-28 01:08:27.417230 | orchestrator | 2026-03-28 01:08:27.417241 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:08:27.417252 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:08:27.417264 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:08:27.417274 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:08:27.417298 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:08:27.417309 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:08:27.417320 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:08:27.417331 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:08:27.417341 | orchestrator | 2026-03-28 01:08:27.417352 | orchestrator | 2026-03-28 01:08:27.417363 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:08:27.417374 | orchestrator | Saturday 28 March 2026 01:08:17 +0000 (0:00:08.140) 0:00:38.933 ******** 2026-03-28 01:08:27.417385 | orchestrator | =============================================================================== 2026-03-28 01:08:27.417396 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 8.14s 2026-03-28 01:08:27.417407 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.23s 2026-03-28 01:08:27.417418 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.24s 2026-03-28 01:08:27.417429 | orchestrator | service-ks-register : ceph-rgw | Creating users 
------------------------- 5.44s 2026-03-28 01:08:27.417440 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.28s 2026-03-28 01:08:27.417451 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.87s 2026-03-28 01:08:27.417462 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.64s 2026-03-28 01:08:27.417472 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.97s 2026-03-28 01:08:27.417483 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.77s 2026-03-28 01:08:27.417494 | orchestrator | 2026-03-28 01:08:27.417505 | orchestrator | 2026-03-28 01:08:27.417516 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:08:27.417526 | orchestrator | 2026-03-28 01:08:27.417537 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:08:27.417548 | orchestrator | Saturday 28 March 2026 01:06:24 +0000 (0:00:00.310) 0:00:00.310 ******** 2026-03-28 01:08:27.417559 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:08:27.417570 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:08:27.417580 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:08:27.417591 | orchestrator | 2026-03-28 01:08:27.417602 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:08:27.417613 | orchestrator | Saturday 28 March 2026 01:06:24 +0000 (0:00:00.523) 0:00:00.834 ******** 2026-03-28 01:08:27.417625 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-03-28 01:08:27.417652 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-03-28 01:08:27.417693 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-03-28 01:08:27.417712 | orchestrator | 2026-03-28 01:08:27.417730 | orchestrator 
| PLAY [Apply role magnum] ******************************************************* 2026-03-28 01:08:27.417747 | orchestrator | 2026-03-28 01:08:27.417765 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-28 01:08:27.417781 | orchestrator | Saturday 28 March 2026 01:06:25 +0000 (0:00:00.849) 0:00:01.684 ******** 2026-03-28 01:08:27.417811 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:08:27.417832 | orchestrator | 2026-03-28 01:08:27.417851 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-03-28 01:08:27.417868 | orchestrator | Saturday 28 March 2026 01:06:26 +0000 (0:00:00.686) 0:00:02.370 ******** 2026-03-28 01:08:27.417886 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-03-28 01:08:27.417916 | orchestrator | 2026-03-28 01:08:27.417935 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-03-28 01:08:27.417955 | orchestrator | Saturday 28 March 2026 01:06:30 +0000 (0:00:03.767) 0:00:06.138 ******** 2026-03-28 01:08:27.417973 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-03-28 01:08:27.417993 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-03-28 01:08:27.418011 | orchestrator | 2026-03-28 01:08:27.418097 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-03-28 01:08:27.418117 | orchestrator | Saturday 28 March 2026 01:06:37 +0000 (0:00:07.389) 0:00:13.528 ******** 2026-03-28 01:08:27.418135 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 01:08:27.418155 | orchestrator | 2026-03-28 01:08:27.418175 | orchestrator | TASK [service-ks-register : magnum | Creating users] 
*************************** 2026-03-28 01:08:27.418194 | orchestrator | Saturday 28 March 2026 01:06:41 +0000 (0:00:03.594) 0:00:17.122 ******** 2026-03-28 01:08:27.418242 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 01:08:27.418263 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-03-28 01:08:27.418282 | orchestrator | 2026-03-28 01:08:27.418296 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-03-28 01:08:27.418307 | orchestrator | Saturday 28 March 2026 01:06:45 +0000 (0:00:04.427) 0:00:21.550 ******** 2026-03-28 01:08:27.418318 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 01:08:27.418329 | orchestrator | 2026-03-28 01:08:27.418340 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-03-28 01:08:27.418351 | orchestrator | Saturday 28 March 2026 01:06:49 +0000 (0:00:04.013) 0:00:25.564 ******** 2026-03-28 01:08:27.418362 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-03-28 01:08:27.418372 | orchestrator | 2026-03-28 01:08:27.418383 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-03-28 01:08:27.418410 | orchestrator | Saturday 28 March 2026 01:06:53 +0000 (0:00:04.276) 0:00:29.841 ******** 2026-03-28 01:08:27.418421 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:08:27.418432 | orchestrator | 2026-03-28 01:08:27.418443 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-03-28 01:08:27.418454 | orchestrator | Saturday 28 March 2026 01:06:57 +0000 (0:00:03.715) 0:00:33.556 ******** 2026-03-28 01:08:27.418465 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:08:27.418475 | orchestrator | 2026-03-28 01:08:27.418487 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-03-28 
01:08:27.418498 | orchestrator | Saturday 28 March 2026 01:07:01 +0000 (0:00:04.426) 0:00:37.982 ******** 2026-03-28 01:08:27.418509 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:08:27.418519 | orchestrator | 2026-03-28 01:08:27.418530 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-28 01:08:27.418541 | orchestrator | Saturday 28 March 2026 01:07:05 +0000 (0:00:03.924) 0:00:41.907 ******** 2026-03-28 01:08:27.418556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:08:27.418604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:08:27.418618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:08:27.418631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:27.418644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:27.418656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:27.418768 | orchestrator | 2026-03-28 01:08:27.418783 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-03-28 01:08:27.418795 | orchestrator | Saturday 28 March 2026 01:07:07 +0000 (0:00:01.614) 0:00:43.522 ******** 2026-03-28 01:08:27.418806 | orchestrator | 
skipping: [testbed-node-0] 2026-03-28 01:08:27.418817 | orchestrator | 2026-03-28 01:08:27.418828 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-28 01:08:27.418839 | orchestrator | Saturday 28 March 2026 01:07:07 +0000 (0:00:00.129) 0:00:43.651 ******** 2026-03-28 01:08:27.418850 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:08:27.418865 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:08:27.418875 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:08:27.418885 | orchestrator | 2026-03-28 01:08:27.418894 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-28 01:08:27.418904 | orchestrator | Saturday 28 March 2026 01:07:08 +0000 (0:00:00.575) 0:00:44.227 ******** 2026-03-28 01:08:27.418914 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 01:08:27.418924 | orchestrator | 2026-03-28 01:08:27.418941 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-28 01:08:27.418951 | orchestrator | Saturday 28 March 2026 01:07:09 +0000 (0:00:01.025) 0:00:45.253 ******** 2026-03-28 01:08:27.418962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:08:27.418973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:08:27.418984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:08:27.419001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:27.419024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:27.419035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:27.419045 | orchestrator | 2026-03-28 01:08:27.419055 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-28 01:08:27.419065 | orchestrator | Saturday 28 March 2026 01:07:12 +0000 (0:00:02.765) 0:00:48.018 ******** 2026-03-28 01:08:27.419075 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:08:27.419085 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:08:27.419096 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:08:27.419113 | orchestrator | 2026-03-28 01:08:27.419130 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-28 01:08:27.419146 | orchestrator | Saturday 28 March 2026 01:07:12 +0000 (0:00:00.362) 0:00:48.380 ******** 2026-03-28 01:08:27.419163 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:08:27.419180 | orchestrator | 2026-03-28 01:08:27.419194 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-28 01:08:27.419210 | orchestrator | Saturday 28 March 2026 01:07:13 +0000 (0:00:00.889) 0:00:49.269 ******** 2026-03-28 01:08:27.419226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:08:27.419267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:08:27.419305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:08:27.419318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:27.419329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:27.419339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:27.419357 | orchestrator | 2026-03-28 01:08:27.419367 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-28 01:08:27.419377 | orchestrator | Saturday 28 March 2026 01:07:16 +0000 (0:00:02.814) 0:00:52.084 ******** 2026-03-28 01:08:27.419388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 01:08:27.419412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:08:27.419423 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:08:27.419433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 01:08:27.419443 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:08:27.419460 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:08:27.419470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 01:08:27.419486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:08:27.419496 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:08:27.419506 | orchestrator | 2026-03-28 01:08:27.419515 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-28 01:08:27.419525 | orchestrator | Saturday 28 March 2026 01:07:16 +0000 (0:00:00.773) 0:00:52.857 ******** 2026-03-28 01:08:27.419542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 01:08:27.419553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:08:27.419568 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:08:27.419579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 01:08:27.419589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:08:27.419599 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:08:27.419619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 01:08:27.419631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:08:27.419641 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:08:27.419651 | orchestrator | 2026-03-28 01:08:27.419661 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-28 01:08:27.419699 | orchestrator | Saturday 28 March 2026 01:07:18 +0000 (0:00:01.445) 0:00:54.303 ******** 2026-03-28 01:08:27.419710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:08:27.419727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:08:27.419742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:08:27.419762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:27.419773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:27.419789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:27.419799 | orchestrator | 2026-03-28 
01:08:27.419809 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-28 01:08:27.419819 | orchestrator | Saturday 28 March 2026 01:07:21 +0000 (0:00:02.984) 0:00:57.287 ******** 2026-03-28 01:08:27.419830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:08:27.419844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:08:27.419862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:08:27.419873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:27.419889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:27.419899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:27.419909 | orchestrator | 2026-03-28 01:08:27.419919 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-28 01:08:27.419929 | orchestrator | Saturday 28 March 2026 01:07:26 +0000 (0:00:04.943) 0:01:02.230 ******** 2026-03-28 01:08:27.419950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 01:08:27.419961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:08:27.419981 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:08:27.419992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 01:08:27.420002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:08:27.420012 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:08:27.420022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 01:08:27.420046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:08:27.420057 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:08:27.420066 | orchestrator | 2026-03-28 01:08:27.420076 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-28 01:08:27.420086 | orchestrator | Saturday 28 March 2026 01:07:27 +0000 (0:00:00.888) 0:01:03.119 ******** 2026-03-28 01:08:27.420103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:08:27.420114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:08:27.420124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:08:27.420139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:27.420157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:27.420180 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:27.420201 | orchestrator | 2026-03-28 01:08:27.420224 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-28 01:08:27.420239 | orchestrator | Saturday 28 March 2026 01:07:29 +0000 (0:00:02.260) 0:01:05.380 ******** 2026-03-28 01:08:27.420254 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:08:27.420269 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:08:27.420289 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:08:27.420305 | orchestrator | 2026-03-28 01:08:27.420320 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-28 01:08:27.420355 | orchestrator | Saturday 28 March 2026 01:07:29 +0000 (0:00:00.324) 0:01:05.704 ******** 2026-03-28 01:08:27.420387 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:08:27.420404 | orchestrator | 2026-03-28 01:08:27.420417 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-28 01:08:27.420427 | orchestrator | Saturday 28 March 2026 01:07:32 +0000 (0:00:02.490) 0:01:08.195 ******** 2026-03-28 01:08:27.420436 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:08:27.420446 | orchestrator | 2026-03-28 01:08:27.420456 | orchestrator | TASK 
[magnum : Running Magnum bootstrap container] ***************************** 2026-03-28 01:08:27.420466 | orchestrator | Saturday 28 March 2026 01:07:34 +0000 (0:00:02.415) 0:01:10.611 ******** 2026-03-28 01:08:27.420476 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:08:27.420485 | orchestrator | 2026-03-28 01:08:27.420495 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-28 01:08:27.420505 | orchestrator | Saturday 28 March 2026 01:07:54 +0000 (0:00:20.046) 0:01:30.658 ******** 2026-03-28 01:08:27.420515 | orchestrator | 2026-03-28 01:08:27.420524 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-28 01:08:27.420534 | orchestrator | Saturday 28 March 2026 01:07:54 +0000 (0:00:00.103) 0:01:30.761 ******** 2026-03-28 01:08:27.420544 | orchestrator | 2026-03-28 01:08:27.420554 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-28 01:08:27.420563 | orchestrator | Saturday 28 March 2026 01:07:54 +0000 (0:00:00.109) 0:01:30.870 ******** 2026-03-28 01:08:27.420573 | orchestrator | 2026-03-28 01:08:27.420583 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-28 01:08:27.420593 | orchestrator | Saturday 28 March 2026 01:07:54 +0000 (0:00:00.077) 0:01:30.948 ******** 2026-03-28 01:08:27.420603 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:08:27.420613 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:08:27.420622 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:08:27.420632 | orchestrator | 2026-03-28 01:08:27.420642 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-28 01:08:27.420651 | orchestrator | Saturday 28 March 2026 01:08:12 +0000 (0:00:17.129) 0:01:48.078 ******** 2026-03-28 01:08:27.420661 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:08:27.420740 
| orchestrator | changed: [testbed-node-1] 2026-03-28 01:08:27.420751 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:08:27.420772 | orchestrator | 2026-03-28 01:08:27.420782 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:08:27.420791 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 01:08:27.420801 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 01:08:27.420809 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 01:08:27.420817 | orchestrator | 2026-03-28 01:08:27.420825 | orchestrator | 2026-03-28 01:08:27.420838 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:08:27.420847 | orchestrator | Saturday 28 March 2026 01:08:26 +0000 (0:00:14.885) 0:02:02.964 ******** 2026-03-28 01:08:27.420855 | orchestrator | =============================================================================== 2026-03-28 01:08:27.420863 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 20.05s 2026-03-28 01:08:27.420877 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 17.13s 2026-03-28 01:08:27.420899 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 14.89s 2026-03-28 01:08:27.420913 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.39s 2026-03-28 01:08:27.420927 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.94s 2026-03-28 01:08:27.420941 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.43s 2026-03-28 01:08:27.420954 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 
4.43s 2026-03-28 01:08:27.420967 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.28s 2026-03-28 01:08:27.420978 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 4.01s 2026-03-28 01:08:27.420987 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.92s 2026-03-28 01:08:27.420998 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.77s 2026-03-28 01:08:27.421012 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.72s 2026-03-28 01:08:27.421025 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.59s 2026-03-28 01:08:27.421039 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.98s 2026-03-28 01:08:27.421053 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.81s 2026-03-28 01:08:27.421066 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.77s 2026-03-28 01:08:27.421081 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.49s 2026-03-28 01:08:27.421091 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.42s 2026-03-28 01:08:27.421099 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.26s 2026-03-28 01:08:27.421107 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.61s 2026-03-28 01:08:27.421115 | orchestrator | 2026-03-28 01:08:27 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:08:27.421123 | orchestrator | 2026-03-28 01:08:27 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:08:27.421131 | orchestrator | 2026-03-28 01:08:27 | INFO  | Task 
2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:08:27.421140 | orchestrator | 2026-03-28 01:08:27 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:08:30.464710 | orchestrator | 2026-03-28 01:08:30 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:08:30.465492 | orchestrator | 2026-03-28 01:08:30 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:08:30.467175 | orchestrator | 2026-03-28 01:08:30 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:08:30.468738 | orchestrator | 2026-03-28 01:08:30 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:08:30.468895 | orchestrator | 2026-03-28 01:08:30 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:08:33.520132 | orchestrator | 2026-03-28 01:08:33 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:08:33.521883 | orchestrator | 2026-03-28 01:08:33 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:08:33.523515 | orchestrator | 2026-03-28 01:08:33 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:08:33.526737 | orchestrator | 2026-03-28 01:08:33 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:08:33.526766 | orchestrator | 2026-03-28 01:08:33 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:08:36.585380 | orchestrator | 2026-03-28 01:08:36 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:08:36.585819 | orchestrator | 2026-03-28 01:08:36 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:08:36.586752 | orchestrator | 2026-03-28 01:08:36 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:08:36.587833 | orchestrator | 2026-03-28 01:08:36 | INFO  | Task 
2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:09:22.327978 | orchestrator | 2026-03-28 01:09:22 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:09:25.357935 | orchestrator | 2026-03-28 01:09:25 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state STARTED 2026-03-28 01:09:25.358842 | orchestrator | 2026-03-28 01:09:25 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:09:25.359917 | orchestrator | 2026-03-28 01:09:25 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:09:25.360971 | orchestrator | 2026-03-28 01:09:25 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:09:25.361160 | orchestrator | 2026-03-28 01:09:25 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:09:28.385188 | orchestrator | 2026-03-28 01:09:28 | INFO  | Task becd5667-cf8b-4e44-b0a9-6aa219b81e8a is in state SUCCESS 2026-03-28 01:09:28.386799 | orchestrator | 2026-03-28 01:09:28.386849 | orchestrator | 2026-03-28 01:09:28.386870 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:09:28.386889 | orchestrator | 2026-03-28 01:09:28.386905 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:09:28.386916 | orchestrator | Saturday 28 March 2026 01:04:06 +0000 (0:00:00.318) 0:00:00.318 ******** 2026-03-28 01:09:28.386927 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:09:28.386940 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:09:28.386951 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:09:28.386962 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:09:28.386973 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:09:28.386984 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:09:28.386995 | orchestrator | 2026-03-28 01:09:28.387006 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2026-03-28 01:09:28.387017 | orchestrator | Saturday 28 March 2026 01:04:07 +0000 (0:00:01.014) 0:00:01.332 ******** 2026-03-28 01:09:28.387029 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-28 01:09:28.387041 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-28 01:09:28.387052 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-28 01:09:28.387063 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-28 01:09:28.387074 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-28 01:09:28.387085 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-28 01:09:28.387141 | orchestrator | 2026-03-28 01:09:28.387278 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-28 01:09:28.387291 | orchestrator | 2026-03-28 01:09:28.387302 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-28 01:09:28.387314 | orchestrator | Saturday 28 March 2026 01:04:08 +0000 (0:00:00.859) 0:00:02.192 ******** 2026-03-28 01:09:28.387326 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:09:28.387338 | orchestrator | 2026-03-28 01:09:28.387349 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-28 01:09:28.387360 | orchestrator | Saturday 28 March 2026 01:04:10 +0000 (0:00:01.455) 0:00:03.647 ******** 2026-03-28 01:09:28.387371 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:09:28.387382 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:09:28.387395 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:09:28.387409 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:09:28.387421 | orchestrator | ok: [testbed-node-4] 2026-03-28 
01:09:28.387434 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:09:28.387481 | orchestrator | 2026-03-28 01:09:28.387499 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-28 01:09:28.387513 | orchestrator | Saturday 28 March 2026 01:04:11 +0000 (0:00:01.336) 0:00:04.983 ******** 2026-03-28 01:09:28.387526 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:09:28.387539 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:09:28.387551 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:09:28.387565 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:09:28.387577 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:09:28.387590 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:09:28.387659 | orchestrator | 2026-03-28 01:09:28.387672 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-28 01:09:28.387684 | orchestrator | Saturday 28 March 2026 01:04:12 +0000 (0:00:01.188) 0:00:06.172 ******** 2026-03-28 01:09:28.387695 | orchestrator | ok: [testbed-node-0] => { 2026-03-28 01:09:28.387722 | orchestrator |  "changed": false, 2026-03-28 01:09:28.387733 | orchestrator |  "msg": "All assertions passed" 2026-03-28 01:09:28.387745 | orchestrator | } 2026-03-28 01:09:28.387756 | orchestrator | ok: [testbed-node-1] => { 2026-03-28 01:09:28.387767 | orchestrator |  "changed": false, 2026-03-28 01:09:28.387799 | orchestrator |  "msg": "All assertions passed" 2026-03-28 01:09:28.387810 | orchestrator | } 2026-03-28 01:09:28.387821 | orchestrator | ok: [testbed-node-2] => { 2026-03-28 01:09:28.387831 | orchestrator |  "changed": false, 2026-03-28 01:09:28.387842 | orchestrator |  "msg": "All assertions passed" 2026-03-28 01:09:28.387853 | orchestrator | } 2026-03-28 01:09:28.387864 | orchestrator | ok: [testbed-node-3] => { 2026-03-28 01:09:28.387874 | orchestrator |  "changed": false, 2026-03-28 01:09:28.387885 | orchestrator |  "msg": "All assertions passed" 
2026-03-28 01:09:28.387896 | orchestrator | } 2026-03-28 01:09:28.387906 | orchestrator | ok: [testbed-node-4] => { 2026-03-28 01:09:28.387917 | orchestrator |  "changed": false, 2026-03-28 01:09:28.387928 | orchestrator |  "msg": "All assertions passed" 2026-03-28 01:09:28.387939 | orchestrator | } 2026-03-28 01:09:28.387950 | orchestrator | ok: [testbed-node-5] => { 2026-03-28 01:09:28.387960 | orchestrator |  "changed": false, 2026-03-28 01:09:28.387971 | orchestrator |  "msg": "All assertions passed" 2026-03-28 01:09:28.387982 | orchestrator | } 2026-03-28 01:09:28.387993 | orchestrator | 2026-03-28 01:09:28.388018 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-28 01:09:28.388029 | orchestrator | Saturday 28 March 2026 01:04:13 +0000 (0:00:00.934) 0:00:07.107 ******** 2026-03-28 01:09:28.388040 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.388051 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.388062 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.388072 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.388083 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.388094 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.388104 | orchestrator | 2026-03-28 01:09:28.388115 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-28 01:09:28.388126 | orchestrator | Saturday 28 March 2026 01:04:14 +0000 (0:00:00.727) 0:00:07.834 ******** 2026-03-28 01:09:28.388137 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-28 01:09:28.388148 | orchestrator | 2026-03-28 01:09:28.388159 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-28 01:09:28.388169 | orchestrator | Saturday 28 March 2026 01:04:17 +0000 (0:00:03.712) 0:00:11.547 ******** 2026-03-28 01:09:28.388180 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-28 01:09:28.388193 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-28 01:09:28.388204 | orchestrator | 2026-03-28 01:09:28.388230 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-28 01:09:28.388241 | orchestrator | Saturday 28 March 2026 01:04:24 +0000 (0:00:07.070) 0:00:18.617 ******** 2026-03-28 01:09:28.388262 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 01:09:28.388273 | orchestrator | 2026-03-28 01:09:28.388284 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-28 01:09:28.388295 | orchestrator | Saturday 28 March 2026 01:04:28 +0000 (0:00:03.626) 0:00:22.243 ******** 2026-03-28 01:09:28.388306 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 01:09:28.388317 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-03-28 01:09:28.388328 | orchestrator | 2026-03-28 01:09:28.388339 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-28 01:09:28.388350 | orchestrator | Saturday 28 March 2026 01:04:32 +0000 (0:00:04.216) 0:00:26.460 ******** 2026-03-28 01:09:28.388361 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 01:09:28.388372 | orchestrator | 2026-03-28 01:09:28.388383 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-28 01:09:28.388394 | orchestrator | Saturday 28 March 2026 01:04:36 +0000 (0:00:04.129) 0:00:30.589 ******** 2026-03-28 01:09:28.388404 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-28 01:09:28.388415 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-28 01:09:28.388426 | orchestrator | 
2026-03-28 01:09:28.388437 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-28 01:09:28.388448 | orchestrator | Saturday 28 March 2026 01:04:43 +0000 (0:00:06.756) 0:00:37.346 ******** 2026-03-28 01:09:28.388459 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.388470 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.388480 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.388491 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.388502 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.388513 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.388653 | orchestrator | 2026-03-28 01:09:28.388667 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-28 01:09:28.388678 | orchestrator | Saturday 28 March 2026 01:04:44 +0000 (0:00:00.823) 0:00:38.169 ******** 2026-03-28 01:09:28.388689 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.388700 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.388711 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.388722 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.388738 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.388757 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.388775 | orchestrator | 2026-03-28 01:09:28.388791 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-28 01:09:28.388808 | orchestrator | Saturday 28 March 2026 01:04:47 +0000 (0:00:02.747) 0:00:40.917 ******** 2026-03-28 01:09:28.388826 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:09:28.388844 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:09:28.388862 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:09:28.388882 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:09:28.388900 | orchestrator | ok: [testbed-node-5] 
2026-03-28 01:09:28.388918 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:09:28.388936 | orchestrator | 2026-03-28 01:09:28.388955 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-28 01:09:28.388974 | orchestrator | Saturday 28 March 2026 01:04:49 +0000 (0:00:02.098) 0:00:43.016 ******** 2026-03-28 01:09:28.388991 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.389010 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.389029 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.389049 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.389065 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.389084 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.389101 | orchestrator | 2026-03-28 01:09:28.389120 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-28 01:09:28.389139 | orchestrator | Saturday 28 March 2026 01:04:52 +0000 (0:00:03.116) 0:00:46.132 ******** 2026-03-28 01:09:28.389204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}}) 2026-03-28 01:09:28.389247 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:28.389261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:09:28.389273 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:28.389284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:09:28.389309 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:28.389324 | orchestrator | 2026-03-28 01:09:28.389357 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-28 01:09:28.389369 | orchestrator | Saturday 28 March 2026 01:04:56 +0000 (0:00:04.400) 0:00:50.533 ******** 2026-03-28 01:09:28.389380 | orchestrator | [WARNING]: Skipped 2026-03-28 01:09:28.389393 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-28 01:09:28.389412 | orchestrator | due to this access issue: 2026-03-28 01:09:28.389423 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-28 01:09:28.389434 | orchestrator | a directory 2026-03-28 01:09:28.389445 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 01:09:28.389456 | orchestrator | 2026-03-28 01:09:28.389474 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-28 01:09:28.389492 | orchestrator | Saturday 28 March 2026 01:04:58 +0000 (0:00:01.282) 0:00:51.816 ******** 2026-03-28 01:09:28.389510 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:09:28.389530 | orchestrator | 2026-03-28 01:09:28.389547 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-28 01:09:28.389564 | orchestrator | Saturday 28 March 2026 01:05:00 +0000 (0:00:02.690) 0:00:54.507 ******** 2026-03-28 01:09:28.389647 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:09:28.389673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:09:28.389715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:09:28.389734 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:28.389768 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:28.389789 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:28.389809 | orchestrator | 2026-03-28 01:09:28.389830 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-28 01:09:28.389849 | orchestrator | Saturday 28 March 2026 01:05:05 +0000 (0:00:04.691) 0:00:59.198 ******** 2026-03-28 01:09:28.389869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:09:28.389902 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.389931 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.389952 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.389985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:09:28.390006 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.390113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:09:28.390136 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.390156 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.390190 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.390209 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.390226 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.390246 | orchestrator | 2026-03-28 01:09:28.390263 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-28 01:09:28.390282 | orchestrator | Saturday 28 March 2026 01:05:10 +0000 (0:00:04.653) 0:01:03.852 ******** 2026-03-28 01:09:28.390348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:09:28.390371 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.390407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:09:28.390427 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.390443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:09:28.390474 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.390492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.390511 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.390544 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.390564 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.390584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.390636 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.390659 | orchestrator | 2026-03-28 01:09:28.390679 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-28 01:09:28.390706 | orchestrator | Saturday 28 March 2026 01:05:14 +0000 (0:00:04.657) 0:01:08.510 ******** 2026-03-28 01:09:28.390725 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.390744 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.390763 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.390783 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.390802 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.390820 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.390839 | orchestrator | 2026-03-28 01:09:28.390859 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-28 01:09:28.390878 | orchestrator | 
Saturday 28 March 2026 01:05:20 +0000 (0:00:05.560) 0:01:14.071 ******** 2026-03-28 01:09:28.390897 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.390915 | orchestrator | 2026-03-28 01:09:28.390934 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-28 01:09:28.390969 | orchestrator | Saturday 28 March 2026 01:05:20 +0000 (0:00:00.355) 0:01:14.426 ******** 2026-03-28 01:09:28.390989 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.391008 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.391075 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.391094 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.391113 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.391132 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.391150 | orchestrator | 2026-03-28 01:09:28.391169 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-28 01:09:28.391188 | orchestrator | Saturday 28 March 2026 01:05:22 +0000 (0:00:01.269) 0:01:15.695 ******** 2026-03-28 01:09:28.391210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:09:28.391232 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.391251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:09:28.391271 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.391300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:09:28.391322 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.391357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.391392 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.391413 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.391434 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.391454 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.391474 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.391494 | orchestrator | 2026-03-28 01:09:28.391513 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-28 01:09:28.391533 | orchestrator | Saturday 28 March 2026 01:05:25 +0000 (0:00:03.413) 0:01:19.109 ******** 2026-03-28 01:09:28.391562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}}) 2026-03-28 01:09:28.391595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:09:28.391679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:09:28.391699 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:28.391718 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:28.391743 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:28.391762 | orchestrator | 2026-03-28 01:09:28.391779 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-28 01:09:28.391797 | orchestrator | Saturday 28 March 2026 01:05:30 +0000 (0:00:05.237) 0:01:24.346 ******** 2026-03-28 01:09:28.391826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:09:28.391856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:09:28.391876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:09:28.391892 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:28.391916 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:28.391941 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:28.391951 | orchestrator | 2026-03-28 01:09:28.391961 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-28 
01:09:28.391972 | orchestrator | Saturday 28 March 2026 01:05:38 +0000 (0:00:07.362) 0:01:31.709 ******** 2026-03-28 01:09:28.391982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:09:28.391993 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.392003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:09:28.392013 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.392023 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.392038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:09:28.392054 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.392065 | orchestrator | 
skipping: [testbed-node-1] 2026-03-28 01:09:28.392081 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.392091 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.392101 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.392112 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.392122 | orchestrator | 2026-03-28 01:09:28.392131 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-28 01:09:28.392141 | orchestrator | Saturday 28 March 2026 
01:05:42 +0000 (0:00:03.952) 0:01:35.661 ******** 2026-03-28 01:09:28.392151 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.392160 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:09:28.392170 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.392179 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.392189 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:09:28.392198 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:09:28.392208 | orchestrator | 2026-03-28 01:09:28.392218 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-28 01:09:28.392228 | orchestrator | Saturday 28 March 2026 01:05:45 +0000 (0:00:03.208) 0:01:38.869 ******** 2026-03-28 01:09:28.392238 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.392254 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.392269 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.392279 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.392297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.392308 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.392318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:09:28.392329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:09:28.392344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:09:28.392365 | orchestrator | 2026-03-28 01:09:28.392375 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-28 01:09:28.392385 | orchestrator | Saturday 28 March 2026 01:05:50 +0000 (0:00:05.332) 0:01:44.202 ******** 2026-03-28 01:09:28.392395 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.392404 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.392414 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.392424 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.392433 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.392443 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.392453 | orchestrator | 2026-03-28 01:09:28.392462 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-28 01:09:28.392472 | orchestrator | Saturday 28 March 2026 01:05:55 +0000 (0:00:04.433) 0:01:48.635 ******** 2026-03-28 01:09:28.392482 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.392492 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.392501 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.392511 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.392521 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.392530 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.392540 | orchestrator | 2026-03-28 01:09:28.392549 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-28 01:09:28.392559 | orchestrator | Saturday 28 March 2026 01:05:57 +0000 (0:00:02.728) 0:01:51.364 ******** 2026-03-28 01:09:28.392574 | orchestrator | 
skipping: [testbed-node-2] 2026-03-28 01:09:28.392584 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.392594 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.392628 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.392647 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.392658 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.392667 | orchestrator | 2026-03-28 01:09:28.392677 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-28 01:09:28.392687 | orchestrator | Saturday 28 March 2026 01:06:02 +0000 (0:00:04.296) 0:01:55.660 ******** 2026-03-28 01:09:28.392697 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.392706 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.392716 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.392726 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.392735 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.392745 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.392754 | orchestrator | 2026-03-28 01:09:28.392764 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-28 01:09:28.392774 | orchestrator | Saturday 28 March 2026 01:06:05 +0000 (0:00:03.286) 0:01:58.946 ******** 2026-03-28 01:09:28.392784 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.392793 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.392803 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.392813 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.392822 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.392832 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.392841 | orchestrator | 2026-03-28 01:09:28.392851 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-28 
01:09:28.392867 | orchestrator | Saturday 28 March 2026 01:06:07 +0000 (0:00:02.315) 0:02:01.262 ******** 2026-03-28 01:09:28.392877 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.392887 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.392897 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.392906 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.392916 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.392925 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.392935 | orchestrator | 2026-03-28 01:09:28.392944 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-28 01:09:28.392954 | orchestrator | Saturday 28 March 2026 01:06:10 +0000 (0:00:03.311) 0:02:04.574 ******** 2026-03-28 01:09:28.392964 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 01:09:28.392974 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.392984 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 01:09:28.392993 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.393003 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 01:09:28.393012 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.393022 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 01:09:28.393032 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.393041 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 01:09:28.393051 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.393061 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 01:09:28.393070 | 
orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.393080 | orchestrator | 2026-03-28 01:09:28.393089 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-28 01:09:28.393099 | orchestrator | Saturday 28 March 2026 01:06:13 +0000 (0:00:02.697) 0:02:07.272 ******** 2026-03-28 01:09:28.393115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:09:28.393126 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.393144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:09:28.393161 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.393171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:09:28.393181 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.393191 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.393202 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.393212 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.393222 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.393237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.393247 | orchestrator | skipping: [testbed-node-5] 2026-03-28 
01:09:28.393257 | orchestrator | 2026-03-28 01:09:28.393267 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-28 01:09:28.393277 | orchestrator | Saturday 28 March 2026 01:06:16 +0000 (0:00:03.141) 0:02:10.414 ******** 2026-03-28 01:09:28.393295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:09:28.393312 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.393322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:09:28.393333 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.393343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.393353 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.393368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:09:28.393379 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.393394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.393411 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.393421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.393431 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.393441 | orchestrator | 2026-03-28 01:09:28.393451 
| orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-28 01:09:28.393460 | orchestrator | Saturday 28 March 2026 01:06:20 +0000 (0:00:04.047) 0:02:14.461 ******** 2026-03-28 01:09:28.393470 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.393480 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.393490 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.393499 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.393509 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.393518 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.393528 | orchestrator | 2026-03-28 01:09:28.393538 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-28 01:09:28.393548 | orchestrator | Saturday 28 March 2026 01:06:23 +0000 (0:00:02.984) 0:02:17.446 ******** 2026-03-28 01:09:28.393557 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.393567 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.393577 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.393587 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:09:28.393596 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:09:28.393723 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:09:28.393735 | orchestrator | 2026-03-28 01:09:28.393745 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-28 01:09:28.393755 | orchestrator | Saturday 28 March 2026 01:06:28 +0000 (0:00:04.479) 0:02:21.926 ******** 2026-03-28 01:09:28.393764 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.393774 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.393783 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.393793 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.393803 | orchestrator | skipping: [testbed-node-4] 
2026-03-28 01:09:28.393812 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.393822 | orchestrator | 2026-03-28 01:09:28.393832 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-28 01:09:28.393842 | orchestrator | Saturday 28 March 2026 01:06:32 +0000 (0:00:04.456) 0:02:26.382 ******** 2026-03-28 01:09:28.393851 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.393861 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.393870 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.393880 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.393890 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.393899 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.393909 | orchestrator | 2026-03-28 01:09:28.393919 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-28 01:09:28.393938 | orchestrator | Saturday 28 March 2026 01:06:35 +0000 (0:00:03.186) 0:02:29.569 ******** 2026-03-28 01:09:28.393947 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.393957 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.393967 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.393977 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.393986 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.393996 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.394006 | orchestrator | 2026-03-28 01:09:28.394051 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-28 01:09:28.394063 | orchestrator | Saturday 28 March 2026 01:06:38 +0000 (0:00:02.808) 0:02:32.377 ******** 2026-03-28 01:09:28.394073 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.394083 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.394098 | orchestrator | skipping: [testbed-node-3] 
2026-03-28 01:09:28.394109 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.394118 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.394128 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.394138 | orchestrator | 2026-03-28 01:09:28.394147 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-28 01:09:28.394157 | orchestrator | Saturday 28 March 2026 01:06:42 +0000 (0:00:04.022) 0:02:36.400 ******** 2026-03-28 01:09:28.394167 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.394177 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.394187 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.394196 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.394206 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.394216 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.394225 | orchestrator | 2026-03-28 01:09:28.394235 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-03-28 01:09:28.394245 | orchestrator | Saturday 28 March 2026 01:06:47 +0000 (0:00:04.567) 0:02:40.968 ******** 2026-03-28 01:09:28.394254 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.394264 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.394273 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.394283 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.394293 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.394303 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.394313 | orchestrator | 2026-03-28 01:09:28.394323 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-28 01:09:28.394341 | orchestrator | Saturday 28 March 2026 01:06:50 +0000 (0:00:02.753) 0:02:43.721 ******** 2026-03-28 01:09:28.394351 | orchestrator | skipping: [testbed-node-0] 
2026-03-28 01:09:28.394361 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.394371 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.394380 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.394390 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.394400 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.394410 | orchestrator | 2026-03-28 01:09:28.394419 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-28 01:09:28.394429 | orchestrator | Saturday 28 March 2026 01:06:52 +0000 (0:00:02.798) 0:02:46.520 ******** 2026-03-28 01:09:28.394439 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 01:09:28.394450 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.394460 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 01:09:28.394470 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.394480 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 01:09:28.394490 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.394500 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 01:09:28.394516 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.394526 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 01:09:28.394536 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.394546 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 01:09:28.394556 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.394566 | orchestrator | 2026-03-28 01:09:28.394576 | orchestrator | TASK 
[neutron : Copying over neutron_taas.conf] ******************************** 2026-03-28 01:09:28.394586 | orchestrator | Saturday 28 March 2026 01:06:56 +0000 (0:00:03.353) 0:02:49.874 ******** 2026-03-28 01:09:28.394596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:09:28.394624 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.394641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:09:28.394651 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.394667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:09:28.394678 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.394688 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.394705 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.394715 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.394725 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.394735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:28.394745 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.394755 | orchestrator | 2026-03-28 01:09:28.394765 | orchestrator | TASK [neutron : Check neutron containers] 
************************************** 2026-03-28 01:09:28.394775 | orchestrator | Saturday 28 March 2026 01:06:59 +0000 (0:00:03.599) 0:02:53.473 ******** 2026-03-28 01:09:28.394790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:09:28.394808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:09:28.394825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:09:28.394836 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:28.394847 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:28.394866 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:28.394877 | orchestrator | 2026-03-28 01:09:28.394887 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-28 01:09:28.394897 | orchestrator | Saturday 28 March 2026 01:07:03 +0000 (0:00:03.954) 0:02:57.427 ******** 2026-03-28 01:09:28.394907 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:28.394917 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:28.394927 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:28.394937 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:28.394954 
| orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:28.394969 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:28.394979 | orchestrator | 2026-03-28 01:09:28.394989 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-28 01:09:28.394999 | orchestrator | Saturday 28 March 2026 01:07:04 +0000 (0:00:00.717) 0:02:58.144 ******** 2026-03-28 01:09:28.395009 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:09:28.395018 | orchestrator | 2026-03-28 01:09:28.395028 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-28 01:09:28.395038 | orchestrator | Saturday 28 March 2026 01:07:07 +0000 (0:00:02.495) 0:03:00.640 ******** 2026-03-28 01:09:28.395047 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:09:28.395057 | orchestrator | 2026-03-28 01:09:28.395067 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-28 01:09:28.395077 | orchestrator | Saturday 28 March 2026 01:07:09 +0000 (0:00:02.632) 0:03:03.272 ******** 2026-03-28 01:09:28.395087 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:09:28.395097 | orchestrator | 2026-03-28 01:09:28.395107 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-28 01:09:28.395116 | orchestrator | Saturday 28 March 2026 01:07:55 +0000 (0:00:46.295) 0:03:49.567 ******** 2026-03-28 01:09:28.395126 | orchestrator | 2026-03-28 01:09:28.395136 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-28 01:09:28.395146 | orchestrator | Saturday 28 March 2026 01:07:56 +0000 (0:00:00.398) 0:03:49.966 ******** 2026-03-28 01:09:28.395155 | orchestrator | 2026-03-28 01:09:28.395166 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-28 01:09:28.395176 | orchestrator | Saturday 28 March 2026 01:07:57 
+0000 (0:00:00.710) 0:03:50.676 ******** 2026-03-28 01:09:28.395185 | orchestrator | 2026-03-28 01:09:28.395195 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-28 01:09:28.395205 | orchestrator | Saturday 28 March 2026 01:07:57 +0000 (0:00:00.104) 0:03:50.780 ******** 2026-03-28 01:09:28.395215 | orchestrator | 2026-03-28 01:09:28.395225 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-28 01:09:28.395234 | orchestrator | Saturday 28 March 2026 01:07:57 +0000 (0:00:00.124) 0:03:50.905 ******** 2026-03-28 01:09:28.395244 | orchestrator | 2026-03-28 01:09:28.395254 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-28 01:09:28.395263 | orchestrator | Saturday 28 March 2026 01:07:57 +0000 (0:00:00.152) 0:03:51.057 ******** 2026-03-28 01:09:28.395273 | orchestrator | 2026-03-28 01:09:28.395283 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-03-28 01:09:28.395293 | orchestrator | Saturday 28 March 2026 01:07:57 +0000 (0:00:00.188) 0:03:51.246 ******** 2026-03-28 01:09:28.395302 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:09:28.395312 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:09:28.395322 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:09:28.395331 | orchestrator | 2026-03-28 01:09:28.395341 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-28 01:09:28.395351 | orchestrator | Saturday 28 March 2026 01:08:34 +0000 (0:00:36.929) 0:04:28.175 ******** 2026-03-28 01:09:28.395361 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:09:28.395370 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:09:28.395380 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:09:28.395390 | orchestrator | 2026-03-28 01:09:28.395400 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-28 01:09:28.395410 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-28 01:09:28.395421 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-28 01:09:28.395431 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-28 01:09:28.395448 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-28 01:09:28.395458 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-28 01:09:28.395473 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-28 01:09:28.395483 | orchestrator | 2026-03-28 01:09:28.395493 | orchestrator | 2026-03-28 01:09:28.395503 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:09:28.395513 | orchestrator | Saturday 28 March 2026 01:09:27 +0000 (0:00:52.524) 0:05:20.700 ******** 2026-03-28 01:09:28.395523 | orchestrator | =============================================================================== 2026-03-28 01:09:28.395532 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 52.52s 2026-03-28 01:09:28.395542 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 46.30s 2026-03-28 01:09:28.395552 | orchestrator | neutron : Restart neutron-server container ----------------------------- 36.93s 2026-03-28 01:09:28.395561 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.36s 2026-03-28 01:09:28.395572 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.07s 2026-03-28 01:09:28.395581 | orchestrator | 
service-ks-register : neutron | Granting user roles --------------------- 6.76s 2026-03-28 01:09:28.395591 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 5.56s 2026-03-28 01:09:28.395644 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.33s 2026-03-28 01:09:28.395662 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.24s 2026-03-28 01:09:28.395672 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.69s 2026-03-28 01:09:28.395682 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 4.66s 2026-03-28 01:09:28.395692 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 4.65s 2026-03-28 01:09:28.395702 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 4.57s 2026-03-28 01:09:28.395712 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.48s 2026-03-28 01:09:28.395721 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 4.46s 2026-03-28 01:09:28.395731 | orchestrator | neutron : Copying over linuxbridge_agent.ini ---------------------------- 4.43s 2026-03-28 01:09:28.395741 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 4.40s 2026-03-28 01:09:28.395750 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 4.30s 2026-03-28 01:09:28.395760 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.22s 2026-03-28 01:09:28.395769 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 4.13s 2026-03-28 01:09:28.395779 | orchestrator | 2026-03-28 01:09:28 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:09:28.395789 | orchestrator 
| 2026-03-28 01:09:28 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:09:28.395799 | orchestrator | 2026-03-28 01:09:28 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:09:28.395809 | orchestrator | 2026-03-28 01:09:28 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:09:31.463232 | orchestrator | 2026-03-28 01:09:31 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:09:31.463329 | orchestrator | 2026-03-28 01:09:31 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:09:31.463376 | orchestrator | 2026-03-28 01:09:31 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:09:31.465052 | orchestrator | 2026-03-28 01:09:31 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:09:31.466344 | orchestrator | 2026-03-28 01:09:31 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:09:34.538645 | orchestrator | 2026-03-28 01:09:34 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:09:34.539642 | orchestrator | 2026-03-28 01:09:34 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:09:34.540936 | orchestrator | 2026-03-28 01:09:34 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:09:34.541903 | orchestrator | 2026-03-28 01:09:34 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:09:34.541954 | orchestrator | 2026-03-28 01:09:34 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:09:37.599781 | orchestrator | 2026-03-28 01:09:37 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:09:37.603647 | orchestrator | 2026-03-28 01:09:37 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:09:37.605277 | orchestrator | 2026-03-28 01:09:37 | INFO  | 
Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:09:37.610450 | orchestrator | 2026-03-28 01:09:37 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:09:37.610556 | orchestrator | 2026-03-28 01:09:37 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:09:40.642698 | orchestrator | 2026-03-28 01:09:40 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:09:40.646870 | orchestrator | 2026-03-28 01:09:40 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:09:40.648981 | orchestrator | 2026-03-28 01:09:40 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:09:40.650789 | orchestrator | 2026-03-28 01:09:40 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:09:40.650843 | orchestrator | 2026-03-28 01:09:40 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:09:43.703662 | orchestrator | 2026-03-28 01:09:43 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:09:43.703747 | orchestrator | 2026-03-28 01:09:43 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:09:43.703759 | orchestrator | 2026-03-28 01:09:43 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:09:43.703768 | orchestrator | 2026-03-28 01:09:43 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:09:43.703777 | orchestrator | 2026-03-28 01:09:43 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:09:46.749013 | orchestrator | 2026-03-28 01:09:46 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:09:46.749888 | orchestrator | 2026-03-28 01:09:46 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:09:46.751127 | orchestrator | 2026-03-28 01:09:46 | INFO  | Task 
5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:09:46.752214 | orchestrator | 2026-03-28 01:09:46 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:09:46.752252 | orchestrator | 2026-03-28 01:09:46 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:09:49.836942 | orchestrator | 2026-03-28 01:09:49 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:09:49.837559 | orchestrator | 2026-03-28 01:09:49 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:09:49.838945 | orchestrator | 2026-03-28 01:09:49 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:09:49.840269 | orchestrator | 2026-03-28 01:09:49 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:09:49.840308 | orchestrator | 2026-03-28 01:09:49 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:09:52.876809 | orchestrator | 2026-03-28 01:09:52 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:09:52.878251 | orchestrator | 2026-03-28 01:09:52 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:09:52.879015 | orchestrator | 2026-03-28 01:09:52 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:09:52.879903 | orchestrator | 2026-03-28 01:09:52 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:09:52.880022 | orchestrator | 2026-03-28 01:09:52 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:09:55.911953 | orchestrator | 2026-03-28 01:09:55 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:09:55.912936 | orchestrator | 2026-03-28 01:09:55 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:09:55.913799 | orchestrator | 2026-03-28 01:09:55 | INFO  | Task 
5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:09:55.914757 | orchestrator | 2026-03-28 01:09:55 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:09:55.914795 | orchestrator | 2026-03-28 01:09:55 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:09:58.972879 | orchestrator | 2026-03-28 01:09:58 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:09:58.972955 | orchestrator | 2026-03-28 01:09:58 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:09:58.972965 | orchestrator | 2026-03-28 01:09:58 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:09:58.972972 | orchestrator | 2026-03-28 01:09:58 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:09:58.972979 | orchestrator | 2026-03-28 01:09:58 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:01.992748 | orchestrator | 2026-03-28 01:10:01 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:10:01.993236 | orchestrator | 2026-03-28 01:10:01 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:10:01.994302 | orchestrator | 2026-03-28 01:10:01 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:10:01.995183 | orchestrator | 2026-03-28 01:10:01 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:10:01.995225 | orchestrator | 2026-03-28 01:10:01 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:05.055188 | orchestrator | 2026-03-28 01:10:05 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:10:05.055865 | orchestrator | 2026-03-28 01:10:05 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:10:05.057330 | orchestrator | 2026-03-28 01:10:05 | INFO  | Task 
5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:10:05.058799 | orchestrator | 2026-03-28 01:10:05 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:10:05.058886 | orchestrator | 2026-03-28 01:10:05 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:08.097784 | orchestrator | 2026-03-28 01:10:08 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:10:08.099053 | orchestrator | 2026-03-28 01:10:08 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:10:08.100019 | orchestrator | 2026-03-28 01:10:08 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:10:08.101004 | orchestrator | 2026-03-28 01:10:08 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:10:08.101216 | orchestrator | 2026-03-28 01:10:08 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:11.163997 | orchestrator | 2026-03-28 01:10:11 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:10:11.164916 | orchestrator | 2026-03-28 01:10:11 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:10:11.165886 | orchestrator | 2026-03-28 01:10:11 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:10:11.166700 | orchestrator | 2026-03-28 01:10:11 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:10:11.166728 | orchestrator | 2026-03-28 01:10:11 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:14.209626 | orchestrator | 2026-03-28 01:10:14 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:10:14.209730 | orchestrator | 2026-03-28 01:10:14 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:10:14.209747 | orchestrator | 2026-03-28 01:10:14 | INFO  | Task 
5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:10:14.209759 | orchestrator | 2026-03-28 01:10:14 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:10:14.209772 | orchestrator | 2026-03-28 01:10:14 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:17.258330 | orchestrator | 2026-03-28 01:10:17 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:10:17.262619 | orchestrator | 2026-03-28 01:10:17 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:10:17.266418 | orchestrator | 2026-03-28 01:10:17 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:10:17.268873 | orchestrator | 2026-03-28 01:10:17 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:10:17.268915 | orchestrator | 2026-03-28 01:10:17 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:20.312484 | orchestrator | 2026-03-28 01:10:20 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:10:20.312823 | orchestrator | 2026-03-28 01:10:20 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:10:20.314434 | orchestrator | 2026-03-28 01:10:20 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:10:20.315653 | orchestrator | 2026-03-28 01:10:20 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:10:20.315695 | orchestrator | 2026-03-28 01:10:20 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:23.357271 | orchestrator | 2026-03-28 01:10:23 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:10:23.358900 | orchestrator | 2026-03-28 01:10:23 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:10:23.359957 | orchestrator | 2026-03-28 01:10:23 | INFO  | Task 
5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:10:23.360425 | orchestrator | 2026-03-28 01:10:23 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:10:23.360447 | orchestrator | 2026-03-28 01:10:23 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:26.401374 | orchestrator | 2026-03-28 01:10:26 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:10:26.401821 | orchestrator | 2026-03-28 01:10:26 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:10:26.402565 | orchestrator | 2026-03-28 01:10:26 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:10:26.403861 | orchestrator | 2026-03-28 01:10:26 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:10:26.403907 | orchestrator | 2026-03-28 01:10:26 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:29.446896 | orchestrator | 2026-03-28 01:10:29 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:10:29.447155 | orchestrator | 2026-03-28 01:10:29 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:10:29.447861 | orchestrator | 2026-03-28 01:10:29 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:10:29.448616 | orchestrator | 2026-03-28 01:10:29 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:10:29.448641 | orchestrator | 2026-03-28 01:10:29 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:32.485405 | orchestrator | 2026-03-28 01:10:32 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:10:32.487065 | orchestrator | 2026-03-28 01:10:32 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:10:32.488588 | orchestrator | 2026-03-28 01:10:32 | INFO  | Task 
5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:10:32.490457 | orchestrator | 2026-03-28 01:10:32 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:10:32.490684 | orchestrator | 2026-03-28 01:10:32 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:35.536175 | orchestrator | 2026-03-28 01:10:35 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:10:35.536598 | orchestrator | 2026-03-28 01:10:35 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:10:35.539116 | orchestrator | 2026-03-28 01:10:35 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:10:35.541472 | orchestrator | 2026-03-28 01:10:35 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:10:35.542203 | orchestrator | 2026-03-28 01:10:35 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:38.584219 | orchestrator | 2026-03-28 01:10:38 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:10:38.586390 | orchestrator | 2026-03-28 01:10:38 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:10:38.589091 | orchestrator | 2026-03-28 01:10:38 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:10:38.591258 | orchestrator | 2026-03-28 01:10:38 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:10:38.591655 | orchestrator | 2026-03-28 01:10:38 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:41.633361 | orchestrator | 2026-03-28 01:10:41 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:10:41.633683 | orchestrator | 2026-03-28 01:10:41 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:10:41.635465 | orchestrator | 2026-03-28 01:10:41 | INFO  | Task 
5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:10:41.638377 | orchestrator | 2026-03-28 01:10:41 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:10:41.638423 | orchestrator | 2026-03-28 01:10:41 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:44.788590 | orchestrator | 2026-03-28 01:10:44 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:10:44.792206 | orchestrator | 2026-03-28 01:10:44 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:10:44.797307 | orchestrator | 2026-03-28 01:10:44 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:10:44.799958 | orchestrator | 2026-03-28 01:10:44 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:10:44.800006 | orchestrator | 2026-03-28 01:10:44 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:47.846197 | orchestrator | 2026-03-28 01:10:47 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:10:47.847755 | orchestrator | 2026-03-28 01:10:47 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:10:47.849618 | orchestrator | 2026-03-28 01:10:47 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:10:47.853624 | orchestrator | 2026-03-28 01:10:47 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:10:47.854454 | orchestrator | 2026-03-28 01:10:47 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:50.908957 | orchestrator | 2026-03-28 01:10:50 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:10:50.910148 | orchestrator | 2026-03-28 01:10:50 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:10:50.911597 | orchestrator | 2026-03-28 01:10:50 | INFO  | Task 
5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:10:50.913180 | orchestrator | 2026-03-28 01:10:50 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:10:50.913473 | orchestrator | 2026-03-28 01:10:50 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:53.964452 | orchestrator | 2026-03-28 01:10:53 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:10:53.966900 | orchestrator | 2026-03-28 01:10:53 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:10:53.969993 | orchestrator | 2026-03-28 01:10:53 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:10:53.972453 | orchestrator | 2026-03-28 01:10:53 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:10:53.972503 | orchestrator | 2026-03-28 01:10:53 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:57.020499 | orchestrator | 2026-03-28 01:10:57 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:10:57.022318 | orchestrator | 2026-03-28 01:10:57 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:10:57.024780 | orchestrator | 2026-03-28 01:10:57 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:10:57.026771 | orchestrator | 2026-03-28 01:10:57 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:10:57.027020 | orchestrator | 2026-03-28 01:10:57 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:00.073797 | orchestrator | 2026-03-28 01:11:00 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:11:00.074595 | orchestrator | 2026-03-28 01:11:00 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:11:00.077058 | orchestrator | 2026-03-28 01:11:00 | INFO  | Task 
5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:11:00.078887 | orchestrator | 2026-03-28 01:11:00 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:11:00.078938 | orchestrator | 2026-03-28 01:11:00 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:03.120771 | orchestrator | 2026-03-28 01:11:03 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:11:03.123899 | orchestrator | 2026-03-28 01:11:03 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:11:03.126346 | orchestrator | 2026-03-28 01:11:03 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:11:03.128479 | orchestrator | 2026-03-28 01:11:03 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:11:03.128567 | orchestrator | 2026-03-28 01:11:03 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:06.177959 | orchestrator | 2026-03-28 01:11:06 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:11:06.178082 | orchestrator | 2026-03-28 01:11:06 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:11:06.179448 | orchestrator | 2026-03-28 01:11:06 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:11:06.182781 | orchestrator | 2026-03-28 01:11:06 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:11:06.182816 | orchestrator | 2026-03-28 01:11:06 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:09.224900 | orchestrator | 2026-03-28 01:11:09 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:11:09.224998 | orchestrator | 2026-03-28 01:11:09 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:11:09.226299 | orchestrator | 2026-03-28 01:11:09 | INFO  | Task 
5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:11:09.226403 | orchestrator | 2026-03-28 01:11:09 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:11:09.226421 | orchestrator | 2026-03-28 01:11:09 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:12.275926 | orchestrator | 2026-03-28 01:11:12 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:11:12.278595 | orchestrator | 2026-03-28 01:11:12 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state STARTED 2026-03-28 01:11:12.280260 | orchestrator | 2026-03-28 01:11:12 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:11:12.281095 | orchestrator | 2026-03-28 01:11:12 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:11:12.281147 | orchestrator | 2026-03-28 01:11:12 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:15.325163 | orchestrator | 2026-03-28 01:11:15 | INFO  | Task f808a19d-fb2f-48aa-8b4e-86a0dbb1cf58 is in state STARTED 2026-03-28 01:11:15.329654 | orchestrator | 2026-03-28 01:11:15 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:11:15.333744 | orchestrator | 2026-03-28 01:11:15 | INFO  | Task bad304f0-3a14-48e9-bbea-93e4d0032985 is in state SUCCESS 2026-03-28 01:11:15.335880 | orchestrator | 2026-03-28 01:11:15.335948 | orchestrator | 2026-03-28 01:11:15.335968 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:11:15.335988 | orchestrator | 2026-03-28 01:11:15.335999 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:11:15.336010 | orchestrator | Saturday 28 March 2026 01:07:33 +0000 (0:00:00.304) 0:00:00.304 ******** 2026-03-28 01:11:15.336020 | orchestrator | ok: [testbed-manager] 2026-03-28 01:11:15.336032 | orchestrator | ok: [testbed-node-0] 
2026-03-28 01:11:15.336041 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:11:15.336051 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:11:15.336061 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:11:15.336071 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:11:15.336080 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:11:15.336090 | orchestrator |
2026-03-28 01:11:15.336100 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 01:11:15.336109 | orchestrator | Saturday 28 March 2026 01:07:34 +0000 (0:00:00.953) 0:00:01.258 ********
2026-03-28 01:11:15.336120 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-03-28 01:11:15.336130 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-03-28 01:11:15.336140 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-03-28 01:11:15.336382 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-03-28 01:11:15.336402 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-03-28 01:11:15.336413 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-03-28 01:11:15.336422 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-03-28 01:11:15.336432 | orchestrator |
2026-03-28 01:11:15.336441 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-03-28 01:11:15.336451 | orchestrator |
2026-03-28 01:11:15.336463 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-03-28 01:11:15.336474 | orchestrator | Saturday 28 March 2026 01:07:35 +0000 (0:00:00.898) 0:00:02.157 ********
2026-03-28 01:11:15.336508 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 01:11:15.336521 | orchestrator |
2026-03-28 01:11:15.336532 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-03-28 01:11:15.336543 | orchestrator | Saturday 28 March 2026 01:07:37 +0000 (0:00:01.624) 0:00:03.781 ********
2026-03-28 01:11:15.336559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:11:15.336575 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:11:15.336587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:11:15.336624 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-28 01:11:15.336656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:11:15.336670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:11:15.336682 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 01:11:15.336695 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:11:15.336828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:11:15.336854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:11:15.336890 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:11:15.336910 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:11:15.336921 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 01:11:15.336931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:11:15.336942 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 01:11:15.336952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:11:15.336968 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 01:11:15.336986 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 01:11:15.336996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:11:15.337015 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-28 01:11:15.337027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 01:11:15.337038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 01:11:15.337053 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 01:11:15.337070 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 01:11:15.337080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 01:11:15.337091 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:11:15.337109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:11:15.337119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:11:15.337130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:11:15.337140 | orchestrator |
2026-03-28 01:11:15.337150 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-03-28 01:11:15.337161 | orchestrator | Saturday 28 March 2026 01:07:40 +0000 (0:00:03.082) 0:00:06.863 ********
2026-03-28 01:11:15.337171 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 01:11:15.337309 | orchestrator |
2026-03-28 01:11:15.337327 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-03-28 01:11:15.337342 | orchestrator | Saturday 28 March 2026 01:07:42 +0000 (0:00:01.520) 0:00:08.384 ********
2026-03-28 01:11:15.337365 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-28 01:11:15.337388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:11:15.337459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:11:15.337504 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:11:15.337517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:11:15.337527 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:11:15.337537 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:11:15.337562 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:11:15.337572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:11:15.337582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:11:15.337592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:11:15.337610 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 01:11:15.337621 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 01:11:15.337631 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 01:11:15.337642 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 01:11:15.337662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:11:15.337673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:11:15.337683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:11:15.337693 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 01:11:15.337710 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 01:11:15.337720 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 01:11:15.337731 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-28 01:11:15.337758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 01:11:15.337768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 01:11:15.337778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 01:11:15.338377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:11:15.338404 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:11:15.338415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:11:15.338436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:11:15.338446 | orchestrator |
2026-03-28 01:11:15.338456 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2026-03-28 01:11:15.338466 | orchestrator | Saturday 28 March 2026 01:07:47 +0000 (0:00:05.880) 0:00:14.264 ********
2026-03-28 01:11:15.338511 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-28 01:11:15.338523 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:11:15.338533 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 01:11:15.338553 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-28 01:11:15.338565 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:11:15.338582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:11:15.338592 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:11:15.338607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:11:15.338617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:11:15.338628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:11:15.338644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:11:15.338655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:11:15.338671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:11:15.338681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:11:15.338696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:11:15.338706 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:11:15.338717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:11:15.338727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 
'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:11:15.338737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:11:15.338747 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:15.338757 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:15.338773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:11:15.338789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:11:15.338799 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:15.338809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:11:15.338819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:11:15.338834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 01:11:15.338844 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:11:15.338854 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:11:15.338866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:11:15.338885 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 01:11:15.338902 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:11:15.338914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:11:15.338926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:11:15.338937 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 01:11:15.338949 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:11:15.338960 | orchestrator | 2026-03-28 01:11:15.338971 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-28 01:11:15.338982 | orchestrator | Saturday 28 March 2026 01:07:49 +0000 (0:00:01.911) 0:00:16.176 ******** 2026-03-28 01:11:15.339000 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 
'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-28 01:11:15.339012 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:11:15.339023 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:11:15.339047 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-28 01:11:15.339061 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:11:15.339072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  
2026-03-28 01:11:15.339088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:11:15.339099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:11:15.339111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:11:15.339122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:11:15.339145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:11:15.339157 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:11:15.339169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:11:15.339180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-28 01:11:15.339192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:11:15.339206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:11:15.339217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:11:15.339227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:11:15.339252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:11:15.339263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:11:15.339273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-28 01:11:15.339283 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:15.339293 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:15.339303 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:15.339313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:11:15.339328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:11:15.339339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 01:11:15.339349 | orchestrator | skipping: 
[testbed-node-3] 2026-03-28 01:11:15.339359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:11:15.339375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:11:15.339391 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 01:11:15.339401 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:11:15.339411 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:11:15.339421 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:11:15.339432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 01:11:15.339448 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:11:15.339464 | orchestrator | 2026-03-28 01:11:15.339507 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-28 01:11:15.339525 | orchestrator | Saturday 28 March 2026 01:07:52 +0000 (0:00:02.310) 0:00:18.486 ******** 2026-03-28 01:11:15.339541 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 
'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-28 01:11:15.339575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:11:15.339605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:11:15.339622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:11:15.339640 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:11:15.339658 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:11:15.339674 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:11:15.339696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:11:15.339722 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:11:15.339738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:11:15.339759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:11:15.339784 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:11:15.339802 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:11:15.339818 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:11:15.339843 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:11:15.339858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:11:15.339886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:11:15.339903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:11:15.339930 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-28 01:11:15.339948 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:11:15.339965 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:11:15.339986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:11:15.340014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:11:15.340032 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:11:15.340056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:11:15.340075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:11:15.340093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:11:15.340109 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:11:15.340126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:11:15.340154 | orchestrator | 2026-03-28 01:11:15.340171 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-28 01:11:15.340196 | orchestrator | Saturday 28 March 2026 01:07:59 +0000 (0:00:07.764) 0:00:26.250 ******** 2026-03-28 01:11:15.340206 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 01:11:15.340217 | orchestrator | 2026-03-28 01:11:15.340229 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-28 01:11:15.340246 | orchestrator | Saturday 28 March 2026 01:08:02 +0000 (0:00:02.381) 0:00:28.632 ******** 2026-03-28 01:11:15.340262 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1312387, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.192757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.340280 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1312387, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.192757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.340306 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1312387, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.192757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.340324 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1312409, 'dev': 111, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1967561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.340341 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1312387, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.192757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.340358 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1312387, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.192757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.340394 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1312387, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.192757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.340413 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1312409, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1967561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.340426 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1312387, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.192757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 01:11:15.340454 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1312409, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1967561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False})  2026-03-28 01:11:15.340465 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1312409, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1967561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.340475 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1312358, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1900992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.340561 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1312409, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1967561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.340598 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1312358, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1900992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.340617 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1312409, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1967561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.340635 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1312358, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1900992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.340663 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 
1312358, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1900992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.340749 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1312398, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1948538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.340769 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1312398, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1948538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.340786 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1312358, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1900992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.340824 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1312409, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1967561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.340842 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1312358, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1900992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.340858 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1312398, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1948538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.340886 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1312398, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1948538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.340904 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1312351, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1883125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.340921 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1312351, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1883125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.340946 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1312351, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1883125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.340965 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1312398, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1948538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.340979 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1312351, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1883125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341019 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1312388, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.193394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341043 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1312398, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1948538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341059 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1312388, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.193394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341073 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1312397, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1946838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341096 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1312388, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.193394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341116 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1312388, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.193394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341131 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1312351, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1883125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341144 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1312358, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1900992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341157 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1312351, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1883125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341179 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1312388, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.193394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341193 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1312389, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1936138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341217 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1312397, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1946838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341237 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1312397, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1946838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341274 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1312397, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1946838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341296 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1312397, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1946838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341361 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1312388, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.193394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341391 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1312389, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1936138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341417 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1312389, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1936138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341431 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1312389, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1936138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341451 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1312389, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1936138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341465 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1312369, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.190904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341478 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1312398, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1948538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341596 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1312369, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.190904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341620 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1312397, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1946838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341646 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1312369, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.190904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341660 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1312389, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1936138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341681 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312408, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1962843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341696 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1312369, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.190904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341710 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1312369, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.190904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341725 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1312369, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.190904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341748 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312408, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1962843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341764 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312408, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1962843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341772 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312408, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1962843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341788 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312339, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1876261, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341797 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1312351, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1883125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341805 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312339, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1876261, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341813 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312408, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1962843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341827 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1312424, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1990075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341841 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312408, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1962843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341849 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312339, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1876261, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341857 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312339, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1876261, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341870 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1312405, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1960638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341879 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1312424, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1990075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341887 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312339, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1876261, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341907 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1312424, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1990075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341916 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312339, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1876261, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341924 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312355, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1889832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341932 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1312388, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.193394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341955 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1312348, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.187809, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:11:15.341963 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False,
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1312424, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1990075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.341971 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1312424, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1990075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.341990 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1312405, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1960638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.341999 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1312405, 'dev': 111, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1960638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342007 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1312424, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1990075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342056 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1312395, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1944525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342072 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1312405, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1960638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342080 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1312405, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1960638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342089 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312355, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1889832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342113 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1312392, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1940153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342122 | 
orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312355, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1889832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342130 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1312405, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1960638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342138 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312355, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1889832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342151 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1312421, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1982644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342160 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:11:15.342170 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1312397, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1946838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 01:11:15.342184 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1312348, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.187809, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342197 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 3, 'inode': 1312355, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1889832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342206 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312355, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1889832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342215 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1312348, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.187809, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342223 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1312348, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.187809, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342235 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1312348, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.187809, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342243 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1312395, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1944525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342262 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1312348, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.187809, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342282 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1312395, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1944525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342291 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1312395, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1944525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342300 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1312395, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1944525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342308 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1312395, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1944525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342320 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1312392, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1940153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342329 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1312389, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1936138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 01:11:15.342342 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1312392, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1940153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342356 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1312392, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1940153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342365 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1312421, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1982644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342373 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:11:15.342381 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1312392, 'dev': 111, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1940153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342390 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1312421, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1982644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342398 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:15.342410 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1312421, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1982644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342427 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:11:15.342436 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1312392, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1774657145.1940153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342445 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1312421, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1982644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342455 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:15.342477 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1312421, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1982644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:11:15.342510 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:15.342524 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1312369, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1774657145.190904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 01:11:15.342537 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312408, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1962843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 01:11:15.342550 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312339, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1876261, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 01:11:15.342569 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1312424, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1990075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 01:11:15.342591 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1312405, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1960638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 01:11:15.342604 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312355, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1889832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 01:11:15.342622 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1312348, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.187809, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 01:11:15.342634 | orchestrator | changed: 
[testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1312395, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1944525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 01:11:15.342647 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1312392, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1940153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 01:11:15.342660 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1312421, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1982644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 01:11:15.342682 | orchestrator | 2026-03-28 01:11:15.342703 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-28 01:11:15.342718 | orchestrator | 
Saturday 28 March 2026 01:08:34 +0000 (0:00:32.183) 0:01:00.816 ******** 2026-03-28 01:11:15.342731 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 01:11:15.342745 | orchestrator | 2026-03-28 01:11:15.342757 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-03-28 01:11:15.342765 | orchestrator | Saturday 28 March 2026 01:08:35 +0000 (0:00:01.035) 0:01:01.851 ******** 2026-03-28 01:11:15.342774 | orchestrator | [WARNING]: Skipped 2026-03-28 01:11:15.342782 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:11:15.342791 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-03-28 01:11:15.342799 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:11:15.342807 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-03-28 01:11:15.342815 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 01:11:15.342823 | orchestrator | [WARNING]: Skipped 2026-03-28 01:11:15.342831 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:11:15.342839 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-03-28 01:11:15.342847 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:11:15.342855 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-03-28 01:11:15.342863 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 01:11:15.342871 | orchestrator | [WARNING]: Skipped 2026-03-28 01:11:15.342879 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:11:15.342887 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-03-28 01:11:15.342895 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:11:15.342903 | 
orchestrator | node-1/prometheus.yml.d' is not a directory 2026-03-28 01:11:15.342911 | orchestrator | [WARNING]: Skipped 2026-03-28 01:11:15.342919 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:11:15.342927 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-03-28 01:11:15.342935 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:11:15.342943 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-03-28 01:11:15.342951 | orchestrator | [WARNING]: Skipped 2026-03-28 01:11:15.342959 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:11:15.342973 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-03-28 01:11:15.342982 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:11:15.342990 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-03-28 01:11:15.342998 | orchestrator | [WARNING]: Skipped 2026-03-28 01:11:15.343006 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:11:15.343014 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-03-28 01:11:15.343022 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:11:15.343030 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-03-28 01:11:15.343038 | orchestrator | [WARNING]: Skipped 2026-03-28 01:11:15.343047 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:11:15.343055 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-03-28 01:11:15.343063 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:11:15.343071 | orchestrator | node-5/prometheus.yml.d' is not a 
directory 2026-03-28 01:11:15.343079 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-28 01:11:15.343094 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-28 01:11:15.343102 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-28 01:11:15.343110 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-28 01:11:15.343118 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-28 01:11:15.343126 | orchestrator | 2026-03-28 01:11:15.343134 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-28 01:11:15.343142 | orchestrator | Saturday 28 March 2026 01:08:38 +0000 (0:00:03.028) 0:01:04.880 ******** 2026-03-28 01:11:15.343150 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-28 01:11:15.343159 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:15.343167 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-28 01:11:15.343175 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:15.343183 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-28 01:11:15.343191 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:15.343200 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-28 01:11:15.343207 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:11:15.343215 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-28 01:11:15.343223 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:11:15.343231 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-28 01:11:15.343239 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:11:15.343247 | orchestrator | changed: [testbed-manager] 
=> (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-03-28 01:11:15.343256 | orchestrator | 2026-03-28 01:11:15.343264 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-03-28 01:11:15.343276 | orchestrator | Saturday 28 March 2026 01:08:58 +0000 (0:00:19.571) 0:01:24.452 ******** 2026-03-28 01:11:15.343285 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-28 01:11:15.343293 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-28 01:11:15.343301 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:15.343309 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:15.343318 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-28 01:11:15.343326 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:11:15.343335 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-28 01:11:15.343343 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:15.343351 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-28 01:11:15.343359 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:11:15.343367 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-28 01:11:15.343375 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:11:15.343383 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-03-28 01:11:15.343391 | orchestrator | 2026-03-28 01:11:15.343399 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-03-28 01:11:15.343407 | orchestrator | Saturday 28 March 2026 01:09:01 
+0000 (0:00:03.493) 0:01:27.946 ******** 2026-03-28 01:11:15.343415 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-28 01:11:15.343424 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-28 01:11:15.343438 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-28 01:11:15.343446 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:15.343454 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:15.343462 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:15.343474 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-03-28 01:11:15.343544 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-28 01:11:15.343559 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:11:15.343572 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-28 01:11:15.343586 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:11:15.343598 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-28 01:11:15.343611 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:11:15.343624 | orchestrator | 2026-03-28 01:11:15.343637 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-03-28 01:11:15.343649 | orchestrator | Saturday 28 March 2026 01:09:04 +0000 (0:00:02.679) 0:01:30.626 ******** 2026-03-28 01:11:15.343663 | 
orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 01:11:15.343675 | orchestrator | 2026-03-28 01:11:15.343687 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-03-28 01:11:15.343701 | orchestrator | Saturday 28 March 2026 01:09:05 +0000 (0:00:00.905) 0:01:31.531 ******** 2026-03-28 01:11:15.343714 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:11:15.343726 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:15.343738 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:15.343749 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:15.343760 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:11:15.343772 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:11:15.343786 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:11:15.343797 | orchestrator | 2026-03-28 01:11:15.343810 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-03-28 01:11:15.343821 | orchestrator | Saturday 28 March 2026 01:09:06 +0000 (0:00:01.107) 0:01:32.639 ******** 2026-03-28 01:11:15.343832 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:11:15.343844 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:11:15.343856 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:11:15.343868 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:11:15.343880 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:15.343892 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:15.343905 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:15.343918 | orchestrator | 2026-03-28 01:11:15.343932 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-03-28 01:11:15.343945 | orchestrator | Saturday 28 March 2026 01:09:09 +0000 (0:00:03.498) 0:01:36.138 ******** 2026-03-28 01:11:15.343958 | orchestrator | skipping: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-28 01:11:15.343972 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-28 01:11:15.343984 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-28 01:11:15.343997 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:11:15.344011 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:15.344023 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:15.344047 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-28 01:11:15.344061 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:11:15.344069 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-28 01:11:15.344084 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:15.344090 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-28 01:11:15.344097 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:11:15.344104 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-28 01:11:15.344110 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:11:15.344117 | orchestrator | 2026-03-28 01:11:15.344124 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-28 01:11:15.344130 | orchestrator | Saturday 28 March 2026 01:09:12 +0000 (0:00:02.235) 0:01:38.373 ******** 2026-03-28 01:11:15.344137 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-28 01:11:15.344144 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:15.344151 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-28 01:11:15.344158 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:15.344164 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-28 01:11:15.344171 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:15.344178 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-28 01:11:15.344185 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-28 01:11:15.344192 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:11:15.344198 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-28 01:11:15.344205 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:11:15.344212 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-28 01:11:15.344218 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:11:15.344225 | orchestrator | 2026-03-28 01:11:15.344232 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-28 01:11:15.344248 | orchestrator | Saturday 28 March 2026 01:09:14 +0000 (0:00:02.316) 0:01:40.690 ******** 2026-03-28 01:11:15.344255 | orchestrator | [WARNING]: Skipped 2026-03-28 01:11:15.344262 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-28 01:11:15.344269 | orchestrator | due to this access issue: 2026-03-28 01:11:15.344276 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-28 01:11:15.344282 | orchestrator | not a directory 2026-03-28 01:11:15.344289 | orchestrator | ok: [testbed-manager -> 
localhost] 2026-03-28 01:11:15.344296 | orchestrator | 2026-03-28 01:11:15.344302 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-28 01:11:15.344309 | orchestrator | Saturday 28 March 2026 01:09:15 +0000 (0:00:01.577) 0:01:42.267 ******** 2026-03-28 01:11:15.344316 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:11:15.344322 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:15.344329 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:15.344336 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:15.344343 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:11:15.344349 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:11:15.344356 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:11:15.344362 | orchestrator | 2026-03-28 01:11:15.344369 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-28 01:11:15.344376 | orchestrator | Saturday 28 March 2026 01:09:17 +0000 (0:00:01.576) 0:01:43.845 ******** 2026-03-28 01:11:15.344382 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:11:15.344389 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:15.344400 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:15.344407 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:15.344413 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:11:15.344420 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:11:15.344427 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:11:15.344434 | orchestrator | 2026-03-28 01:11:15.344440 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-28 01:11:15.344447 | orchestrator | Saturday 28 March 2026 01:09:19 +0000 (0:00:01.718) 0:01:45.564 ******** 2026-03-28 01:11:15.344455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:11:15.344467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:11:15.344476 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-28 01:11:15.344513 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:11:15.344528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:11:15.344535 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:11:15.344548 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:11:15.344556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:11:15.344567 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:11:15.344574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:11:15.344581 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:11:15.344588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:11:15.344602 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:11:15.344610 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:11:15.344622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:11:15.344630 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:11:15.344644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:11:15.344651 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:11:15.344658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:11:15.344666 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:11:15.344678 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:11:15.344690 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:11:15.344699 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-28 01:11:15.344710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:11:15.344718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:11:15.344725 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:11:15.344737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:11:15.344749 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:11:15.344756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:11:15.344763 | orchestrator | 2026-03-28 01:11:15.344770 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-28 01:11:15.344777 | orchestrator | Saturday 28 March 2026 01:09:25 +0000 (0:00:05.827) 0:01:51.392 ******** 2026-03-28 01:11:15.344784 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-28 01:11:15.344791 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:11:15.344798 | orchestrator | 2026-03-28 01:11:15.344805 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-28 01:11:15.344811 | orchestrator | Saturday 28 March 2026 01:09:26 +0000 (0:00:01.619) 0:01:53.011 ******** 2026-03-28 01:11:15.344818 | orchestrator | 2026-03-28 01:11:15.344825 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 
2026-03-28 01:11:15.344831 | orchestrator | Saturday 28 March 2026 01:09:26 +0000 (0:00:00.086) 0:01:53.098 ******** 2026-03-28 01:11:15.344838 | orchestrator | 2026-03-28 01:11:15.344845 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-28 01:11:15.344851 | orchestrator | Saturday 28 March 2026 01:09:26 +0000 (0:00:00.072) 0:01:53.171 ******** 2026-03-28 01:11:15.344858 | orchestrator | 2026-03-28 01:11:15.344865 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-28 01:11:15.344872 | orchestrator | Saturday 28 March 2026 01:09:26 +0000 (0:00:00.060) 0:01:53.231 ******** 2026-03-28 01:11:15.344879 | orchestrator | 2026-03-28 01:11:15.344886 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-28 01:11:15.344896 | orchestrator | Saturday 28 March 2026 01:09:27 +0000 (0:00:00.197) 0:01:53.428 ******** 2026-03-28 01:11:15.344903 | orchestrator | 2026-03-28 01:11:15.344910 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-28 01:11:15.344917 | orchestrator | Saturday 28 March 2026 01:09:27 +0000 (0:00:00.063) 0:01:53.491 ******** 2026-03-28 01:11:15.344924 | orchestrator | 2026-03-28 01:11:15.344930 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-28 01:11:15.344937 | orchestrator | Saturday 28 March 2026 01:09:27 +0000 (0:00:00.065) 0:01:53.557 ******** 2026-03-28 01:11:15.344944 | orchestrator | 2026-03-28 01:11:15.344951 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-28 01:11:15.344957 | orchestrator | Saturday 28 March 2026 01:09:27 +0000 (0:00:00.106) 0:01:53.663 ******** 2026-03-28 01:11:15.344964 | orchestrator | changed: [testbed-manager] 2026-03-28 01:11:15.344971 | orchestrator | 2026-03-28 01:11:15.344978 | orchestrator | RUNNING 
HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-28 01:11:15.344984 | orchestrator | Saturday 28 March 2026 01:09:44 +0000 (0:00:17.491) 0:02:11.155 ******** 2026-03-28 01:11:15.344991 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:15.344998 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:15.345005 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:11:15.345016 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:15.345023 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:11:15.345029 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:11:15.345036 | orchestrator | changed: [testbed-manager] 2026-03-28 01:11:15.345043 | orchestrator | 2026-03-28 01:11:15.345050 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-28 01:11:15.345056 | orchestrator | Saturday 28 March 2026 01:10:04 +0000 (0:00:19.702) 0:02:30.857 ******** 2026-03-28 01:11:15.345063 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:15.345070 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:15.345077 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:15.345084 | orchestrator | 2026-03-28 01:11:15.345090 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-28 01:11:15.345097 | orchestrator | Saturday 28 March 2026 01:10:17 +0000 (0:00:12.500) 0:02:43.358 ******** 2026-03-28 01:11:15.345104 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:15.345111 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:15.345117 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:15.345124 | orchestrator | 2026-03-28 01:11:15.345131 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-28 01:11:15.345138 | orchestrator | Saturday 28 March 2026 01:10:23 +0000 (0:00:06.874) 0:02:50.233 ******** 2026-03-28 01:11:15.345145 | 
orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:15.345151 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:11:15.345158 | orchestrator | changed: [testbed-manager] 2026-03-28 01:11:15.345165 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:15.345172 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:15.345183 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:11:15.345191 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:11:15.345197 | orchestrator | 2026-03-28 01:11:15.345204 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-28 01:11:15.345211 | orchestrator | Saturday 28 March 2026 01:10:39 +0000 (0:00:16.008) 0:03:06.241 ******** 2026-03-28 01:11:15.345218 | orchestrator | changed: [testbed-manager] 2026-03-28 01:11:15.345225 | orchestrator | 2026-03-28 01:11:15.345232 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-28 01:11:15.345238 | orchestrator | Saturday 28 March 2026 01:10:49 +0000 (0:00:09.147) 0:03:15.389 ******** 2026-03-28 01:11:15.345245 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:15.345252 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:15.345259 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:15.345266 | orchestrator | 2026-03-28 01:11:15.345273 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-28 01:11:15.345280 | orchestrator | Saturday 28 March 2026 01:10:55 +0000 (0:00:06.134) 0:03:21.523 ******** 2026-03-28 01:11:15.345286 | orchestrator | changed: [testbed-manager] 2026-03-28 01:11:15.345293 | orchestrator | 2026-03-28 01:11:15.345300 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-28 01:11:15.345307 | orchestrator | Saturday 28 March 2026 01:11:00 +0000 (0:00:05.671) 0:03:27.194 ******** 2026-03-28 01:11:15.345314 | 
orchestrator | changed: [testbed-node-5] 2026-03-28 01:11:15.345320 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:11:15.345327 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:11:15.345334 | orchestrator | 2026-03-28 01:11:15.345341 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:11:15.345348 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-28 01:11:15.345355 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-28 01:11:15.345362 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-28 01:11:15.345373 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-28 01:11:15.345380 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-28 01:11:15.345387 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-28 01:11:15.345397 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-28 01:11:15.345404 | orchestrator | 2026-03-28 01:11:15.345411 | orchestrator | 2026-03-28 01:11:15.345418 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:11:15.345428 | orchestrator | Saturday 28 March 2026 01:11:11 +0000 (0:00:10.870) 0:03:38.065 ******** 2026-03-28 01:11:15.345438 | orchestrator | =============================================================================== 2026-03-28 01:11:15.345447 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 32.18s 2026-03-28 01:11:15.345457 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 19.70s 
2026-03-28 01:11:15.345466 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 19.57s 2026-03-28 01:11:15.345475 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 17.49s 2026-03-28 01:11:15.345515 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 16.01s 2026-03-28 01:11:15.345526 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 12.50s 2026-03-28 01:11:15.345536 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.87s 2026-03-28 01:11:15.345548 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.15s 2026-03-28 01:11:15.345559 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.76s 2026-03-28 01:11:15.345571 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 6.87s 2026-03-28 01:11:15.345583 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 6.13s 2026-03-28 01:11:15.345594 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.88s 2026-03-28 01:11:15.345606 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.83s 2026-03-28 01:11:15.345618 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.67s 2026-03-28 01:11:15.345629 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.50s 2026-03-28 01:11:15.345640 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.49s 2026-03-28 01:11:15.345647 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.08s 2026-03-28 01:11:15.345653 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 3.03s 2026-03-28 
01:11:15.345660 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.68s 2026-03-28 01:11:15.345667 | orchestrator | prometheus : Find custom prometheus alert rules files ------------------- 2.38s 2026-03-28 01:11:15.345679 | orchestrator | 2026-03-28 01:11:15 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:11:15.345687 | orchestrator | 2026-03-28 01:11:15 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:11:15.345693 | orchestrator | 2026-03-28 01:11:15 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:18.496777 | orchestrator | 2026-03-28 01:11:18 | INFO  | Task f808a19d-fb2f-48aa-8b4e-86a0dbb1cf58 is in state STARTED 2026-03-28 01:11:18.497463 | orchestrator | 2026-03-28 01:11:18 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:11:18.498738 | orchestrator | 2026-03-28 01:11:18 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:11:18.502799 | orchestrator | 2026-03-28 01:11:18 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:11:18.508564 | orchestrator | 2026-03-28 01:11:18 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:21.539565 | orchestrator | 2026-03-28 01:11:21 | INFO  | Task f808a19d-fb2f-48aa-8b4e-86a0dbb1cf58 is in state STARTED 2026-03-28 01:11:21.540018 | orchestrator | 2026-03-28 01:11:21 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:11:21.541083 | orchestrator | 2026-03-28 01:11:21 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:11:21.543829 | orchestrator | 2026-03-28 01:11:21 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:11:21.543898 | orchestrator | 2026-03-28 01:11:21 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:24.585143 | orchestrator | 2026-03-28 01:11:24 | 
INFO  | Task f808a19d-fb2f-48aa-8b4e-86a0dbb1cf58 is in state STARTED 2026-03-28 01:11:24.587892 | orchestrator | 2026-03-28 01:11:24 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:11:24.589784 | orchestrator | 2026-03-28 01:11:24 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:11:24.591883 | orchestrator | 2026-03-28 01:11:24 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:11:24.591956 | orchestrator | 2026-03-28 01:11:24 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:27.642503 | orchestrator | 2026-03-28 01:11:27 | INFO  | Task f808a19d-fb2f-48aa-8b4e-86a0dbb1cf58 is in state STARTED 2026-03-28 01:11:27.648335 | orchestrator | 2026-03-28 01:11:27 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:11:27.650415 | orchestrator | 2026-03-28 01:11:27 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:11:27.651818 | orchestrator | 2026-03-28 01:11:27 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:11:27.651965 | orchestrator | 2026-03-28 01:11:27 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:30.692718 | orchestrator | 2026-03-28 01:11:30 | INFO  | Task f808a19d-fb2f-48aa-8b4e-86a0dbb1cf58 is in state STARTED 2026-03-28 01:11:30.693217 | orchestrator | 2026-03-28 01:11:30 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:11:30.693694 | orchestrator | 2026-03-28 01:11:30 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:11:30.694688 | orchestrator | 2026-03-28 01:11:30 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:11:30.694962 | orchestrator | 2026-03-28 01:11:30 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:33.765016 | orchestrator | 2026-03-28 01:11:33 | INFO  | Task 
f808a19d-fb2f-48aa-8b4e-86a0dbb1cf58 is in state STARTED 2026-03-28 01:11:33.766550 | orchestrator | 2026-03-28 01:11:33 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:11:33.769056 | orchestrator | 2026-03-28 01:11:33 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:11:33.770116 | orchestrator | 2026-03-28 01:11:33 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:11:33.770165 | orchestrator | 2026-03-28 01:11:33 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:36.860525 | orchestrator | 2026-03-28 01:11:36 | INFO  | Task f808a19d-fb2f-48aa-8b4e-86a0dbb1cf58 is in state STARTED 2026-03-28 01:11:36.869293 | orchestrator | 2026-03-28 01:11:36 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:11:36.869379 | orchestrator | 2026-03-28 01:11:36 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:11:36.869388 | orchestrator | 2026-03-28 01:11:36 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:11:36.869396 | orchestrator | 2026-03-28 01:11:36 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:39.923773 | orchestrator | 2026-03-28 01:11:39 | INFO  | Task f808a19d-fb2f-48aa-8b4e-86a0dbb1cf58 is in state STARTED 2026-03-28 01:11:39.926264 | orchestrator | 2026-03-28 01:11:39 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:11:39.928816 | orchestrator | 2026-03-28 01:11:39 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:11:39.931759 | orchestrator | 2026-03-28 01:11:39 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:11:39.931822 | orchestrator | 2026-03-28 01:11:39 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:42.981361 | orchestrator | 2026-03-28 01:11:42 | INFO  | Task 
f808a19d-fb2f-48aa-8b4e-86a0dbb1cf58 is in state STARTED 2026-03-28 01:11:42.982660 | orchestrator | 2026-03-28 01:11:42 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:11:42.983737 | orchestrator | 2026-03-28 01:11:42 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:11:42.985528 | orchestrator | 2026-03-28 01:11:42 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:11:42.985559 | orchestrator | 2026-03-28 01:11:42 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:46.045622 | orchestrator | 2026-03-28 01:11:46 | INFO  | Task f808a19d-fb2f-48aa-8b4e-86a0dbb1cf58 is in state STARTED 2026-03-28 01:11:46.046788 | orchestrator | 2026-03-28 01:11:46 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:11:46.047136 | orchestrator | 2026-03-28 01:11:46 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:11:46.048713 | orchestrator | 2026-03-28 01:11:46 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state STARTED 2026-03-28 01:11:46.048755 | orchestrator | 2026-03-28 01:11:46 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:49.089510 | orchestrator | 2026-03-28 01:11:49 | INFO  | Task f808a19d-fb2f-48aa-8b4e-86a0dbb1cf58 is in state STARTED 2026-03-28 01:11:49.092378 | orchestrator | 2026-03-28 01:11:49 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:11:49.096786 | orchestrator | 2026-03-28 01:11:49 | INFO  | Task 62b07990-b1b1-4a0b-a59b-70f39556b067 is in state STARTED 2026-03-28 01:11:49.105324 | orchestrator | 2026-03-28 01:11:49 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state STARTED 2026-03-28 01:11:49.107069 | orchestrator | 2026-03-28 01:11:49 | INFO  | Task 2616b8bc-2a24-4652-a61e-9f501c83d8f9 is in state SUCCESS 2026-03-28 01:11:49.110998 | orchestrator | 2026-03-28 01:11:49.111059 | orchestrator 
| 2026-03-28 01:11:49.111065 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:11:49.111071 | orchestrator | 2026-03-28 01:11:49.111075 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:11:49.111080 | orchestrator | Saturday 28 March 2026 01:08:24 +0000 (0:00:00.206) 0:00:00.206 ******** 2026-03-28 01:11:49.111084 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:11:49.111089 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:11:49.111112 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:11:49.111116 | orchestrator | 2026-03-28 01:11:49.111120 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:11:49.111124 | orchestrator | Saturday 28 March 2026 01:08:24 +0000 (0:00:00.249) 0:00:00.455 ******** 2026-03-28 01:11:49.111128 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-28 01:11:49.111134 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-28 01:11:49.111137 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-28 01:11:49.111141 | orchestrator | 2026-03-28 01:11:49.111145 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-28 01:11:49.111149 | orchestrator | 2026-03-28 01:11:49.111153 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-28 01:11:49.111157 | orchestrator | Saturday 28 March 2026 01:08:24 +0000 (0:00:00.330) 0:00:00.786 ******** 2026-03-28 01:11:49.111161 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:11:49.111166 | orchestrator | 2026-03-28 01:11:49.111170 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-28 01:11:49.111176 | orchestrator | Saturday 28 
March 2026 01:08:25 +0000 (0:00:00.491) 0:00:01.277 ******** 2026-03-28 01:11:49.111182 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-28 01:11:49.111188 | orchestrator | 2026-03-28 01:11:49.111193 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-28 01:11:49.111199 | orchestrator | Saturday 28 March 2026 01:08:28 +0000 (0:00:03.626) 0:00:04.903 ******** 2026-03-28 01:11:49.111205 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-28 01:11:49.111211 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-28 01:11:49.111218 | orchestrator | 2026-03-28 01:11:49.111224 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-28 01:11:49.111230 | orchestrator | Saturday 28 March 2026 01:08:35 +0000 (0:00:06.122) 0:00:11.026 ******** 2026-03-28 01:11:49.111238 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 01:11:49.111242 | orchestrator | 2026-03-28 01:11:49.111246 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-28 01:11:49.111250 | orchestrator | Saturday 28 March 2026 01:08:38 +0000 (0:00:03.688) 0:00:14.714 ******** 2026-03-28 01:11:49.111254 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 01:11:49.111258 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-28 01:11:49.111262 | orchestrator | 2026-03-28 01:11:49.111266 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-28 01:11:49.111270 | orchestrator | Saturday 28 March 2026 01:08:43 +0000 (0:00:04.327) 0:00:19.041 ******** 2026-03-28 01:11:49.111274 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 01:11:49.111278 | orchestrator | 2026-03-28 
01:11:49.111281 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-28 01:11:49.111285 | orchestrator | Saturday 28 March 2026 01:08:46 +0000 (0:00:03.799) 0:00:22.841 ******** 2026-03-28 01:11:49.111289 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-28 01:11:49.111293 | orchestrator | 2026-03-28 01:11:49.111297 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-28 01:11:49.111300 | orchestrator | Saturday 28 March 2026 01:08:50 +0000 (0:00:03.584) 0:00:26.425 ******** 2026-03-28 01:11:49.111329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:11:49.111342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:11:49.111349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:11:49.111357 | orchestrator | 2026-03-28 01:11:49.111361 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-28 
01:11:49.111364 | orchestrator | Saturday 28 March 2026 01:08:53 +0000 (0:00:03.287) 0:00:29.713 ******** 2026-03-28 01:11:49.111369 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:11:49.111373 | orchestrator | 2026-03-28 01:11:49.111379 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-28 01:11:49.111383 | orchestrator | Saturday 28 March 2026 01:08:54 +0000 (0:00:00.794) 0:00:30.508 ******** 2026-03-28 01:11:49.111386 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:49.111390 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:49.111394 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:49.111398 | orchestrator | 2026-03-28 01:11:49.111402 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-28 01:11:49.111406 | orchestrator | Saturday 28 March 2026 01:08:58 +0000 (0:00:04.100) 0:00:34.608 ******** 2026-03-28 01:11:49.111410 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-28 01:11:49.111414 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-28 01:11:49.111418 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-28 01:11:49.111422 | orchestrator | 2026-03-28 01:11:49.111426 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-28 01:11:49.111429 | orchestrator | Saturday 28 March 2026 01:09:00 +0000 (0:00:02.261) 0:00:36.870 ******** 2026-03-28 01:11:49.111433 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-28 01:11:49.111437 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-28 01:11:49.111458 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-28 01:11:49.111463 | orchestrator | 2026-03-28 01:11:49.111467 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-28 01:11:49.111471 | orchestrator | Saturday 28 March 2026 01:09:02 +0000 (0:00:01.474) 0:00:38.344 ******** 2026-03-28 01:11:49.111475 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:11:49.111478 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:11:49.111482 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:11:49.111486 | orchestrator | 2026-03-28 01:11:49.111490 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-28 01:11:49.111493 | orchestrator | Saturday 28 March 2026 01:09:04 +0000 (0:00:01.709) 0:00:40.054 ******** 2026-03-28 01:11:49.111497 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:49.111501 | orchestrator | 2026-03-28 01:11:49.111505 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-28 01:11:49.111509 | orchestrator | Saturday 28 March 2026 01:09:04 +0000 (0:00:00.140) 0:00:40.195 ******** 2026-03-28 01:11:49.111512 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:49.111516 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:49.111520 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:49.111524 | orchestrator | 2026-03-28 01:11:49.111531 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-28 01:11:49.111535 | orchestrator | Saturday 28 March 2026 01:09:04 +0000 (0:00:00.345) 0:00:40.540 ******** 2026-03-28 01:11:49.111538 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 
2026-03-28 01:11:49.111557 | orchestrator | 2026-03-28 01:11:49.111561 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-28 01:11:49.111565 | orchestrator | Saturday 28 March 2026 01:09:05 +0000 (0:00:00.715) 0:00:41.255 ******** 2026-03-28 01:11:49.111575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}}}}) 2026-03-28 01:11:49.111580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:11:49.111585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:11:49.111593 | orchestrator | 2026-03-28 01:11:49.111597 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-28 01:11:49.111602 | orchestrator | Saturday 28 March 2026 01:09:11 +0000 (0:00:05.901) 0:00:47.156 ******** 2026-03-28 01:11:49.111613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 01:11:49.111618 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:49.111623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 01:11:49.111631 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:49.111642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 01:11:49.111647 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:49.111652 | orchestrator | 2026-03-28 01:11:49.111656 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-28 01:11:49.111660 | orchestrator | Saturday 28 March 2026 01:09:16 +0000 (0:00:05.133) 0:00:52.290 ******** 2026-03-28 01:11:49.111665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 01:11:49.111673 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:49.111683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 
'', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 01:11:49.111688 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:49.111693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 
'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 01:11:49.111709 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:49.111716 | orchestrator | 2026-03-28 01:11:49.111722 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-28 01:11:49.111728 | orchestrator | Saturday 28 March 2026 01:09:22 +0000 (0:00:06.011) 0:00:58.301 ******** 2026-03-28 01:11:49.111734 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:49.111739 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:49.111745 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:49.111751 | orchestrator | 2026-03-28 01:11:49.111756 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-28 01:11:49.111762 | orchestrator | Saturday 28 March 2026 01:09:27 +0000 (0:00:04.751) 0:01:03.053 ******** 2026-03-28 01:11:49.111772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:11:49.111785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:11:49.111798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:11:49.111805 | orchestrator | 2026-03-28 01:11:49.111812 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-28 01:11:49.111818 | orchestrator | Saturday 28 March 2026 01:09:33 +0000 (0:00:05.936) 0:01:08.989 ******** 2026-03-28 01:11:49.111822 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:49.111825 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:49.111829 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:49.111833 | orchestrator | 2026-03-28 01:11:49.111837 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-28 01:11:49.111843 | orchestrator | Saturday 28 March 2026 01:09:42 +0000 (0:00:09.873) 0:01:18.863 ******** 2026-03-28 01:11:49.111847 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:49.111851 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:49.111854 | 
orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:49.111858 | orchestrator | 2026-03-28 01:11:49.111862 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-28 01:11:49.111866 | orchestrator | Saturday 28 March 2026 01:09:53 +0000 (0:00:10.374) 0:01:29.237 ******** 2026-03-28 01:11:49.111870 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:49.111977 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:49.111983 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:49.111987 | orchestrator | 2026-03-28 01:11:49.111991 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-28 01:11:49.111995 | orchestrator | Saturday 28 March 2026 01:10:00 +0000 (0:00:06.837) 0:01:36.074 ******** 2026-03-28 01:11:49.111999 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:49.112007 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:49.112011 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:49.112015 | orchestrator | 2026-03-28 01:11:49.112018 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-28 01:11:49.112022 | orchestrator | Saturday 28 March 2026 01:10:06 +0000 (0:00:06.607) 0:01:42.682 ******** 2026-03-28 01:11:49.112026 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:49.112030 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:49.112034 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:49.112037 | orchestrator | 2026-03-28 01:11:49.112041 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-28 01:11:49.112045 | orchestrator | Saturday 28 March 2026 01:10:12 +0000 (0:00:05.348) 0:01:48.031 ******** 2026-03-28 01:11:49.112049 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:49.112052 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:49.112056 | 
orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:49.112060 | orchestrator | 2026-03-28 01:11:49.112063 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-28 01:11:49.112067 | orchestrator | Saturday 28 March 2026 01:10:12 +0000 (0:00:00.359) 0:01:48.390 ******** 2026-03-28 01:11:49.112071 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-28 01:11:49.112075 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:49.112079 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-28 01:11:49.112083 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:49.112087 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-28 01:11:49.112090 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:49.112094 | orchestrator | 2026-03-28 01:11:49.112098 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-28 01:11:49.112102 | orchestrator | Saturday 28 March 2026 01:10:16 +0000 (0:00:04.081) 0:01:52.472 ******** 2026-03-28 01:11:49.112105 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:49.112109 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:49.112113 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:49.112117 | orchestrator | 2026-03-28 01:11:49.112120 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-28 01:11:49.112124 | orchestrator | Saturday 28 March 2026 01:10:22 +0000 (0:00:05.961) 0:01:58.434 ******** 2026-03-28 01:11:49.112128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:11:49.112144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:11:49.112148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:11:49.112153 | orchestrator | 2026-03-28 01:11:49.112156 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-28 01:11:49.112160 | orchestrator | Saturday 28 March 2026 01:10:29 +0000 (0:00:06.917) 0:02:05.351 ******** 2026-03-28 01:11:49.112164 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:49.112168 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:49.112186 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:49.112190 | orchestrator | 2026-03-28 01:11:49.112194 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-28 01:11:49.112198 | orchestrator | Saturday 28 March 2026 01:10:29 +0000 (0:00:00.389) 0:02:05.741 ******** 2026-03-28 01:11:49.112201 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:49.112205 | orchestrator | 2026-03-28 01:11:49.112209 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-03-28 01:11:49.112213 | orchestrator | Saturday 28 March 2026 01:10:32 +0000 (0:00:02.526) 0:02:08.267 ******** 2026-03-28 01:11:49.112216 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:49.112220 | orchestrator | 2026-03-28 01:11:49.112224 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-28 01:11:49.112227 | orchestrator | Saturday 28 March 2026 01:10:35 +0000 (0:00:02.969) 0:02:11.236 ******** 2026-03-28 01:11:49.112234 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:49.112238 | orchestrator | 2026-03-28 01:11:49.112242 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-28 01:11:49.112245 | orchestrator | Saturday 28 March 2026 01:10:37 +0000 (0:00:02.213) 0:02:13.450 ******** 2026-03-28 01:11:49.112249 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:49.112255 | orchestrator | 2026-03-28 01:11:49.112261 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-28 01:11:49.112269 | orchestrator | Saturday 28 March 2026 01:11:08 +0000 (0:00:30.886) 0:02:44.337 ******** 2026-03-28 01:11:49.112276 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:49.112282 | orchestrator | 2026-03-28 01:11:49.112288 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-28 01:11:49.112294 | orchestrator | Saturday 28 March 2026 01:11:10 +0000 (0:00:02.561) 0:02:46.898 ******** 2026-03-28 01:11:49.112302 | orchestrator | 2026-03-28 01:11:49.112306 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-28 01:11:49.112310 | orchestrator | Saturday 28 March 2026 01:11:11 +0000 (0:00:00.081) 0:02:46.980 ******** 2026-03-28 01:11:49.112313 | orchestrator | 2026-03-28 01:11:49.112317 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-03-28 01:11:49.112322 | orchestrator | Saturday 28 March 2026 01:11:11 +0000 (0:00:00.064) 0:02:47.045 ******** 2026-03-28 01:11:49.112327 | orchestrator | 2026-03-28 01:11:49.112334 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-28 01:11:49.112340 | orchestrator | Saturday 28 March 2026 01:11:11 +0000 (0:00:00.075) 0:02:47.120 ******** 2026-03-28 01:11:49.112346 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:49.112352 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:49.112358 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:49.112365 | orchestrator | 2026-03-28 01:11:49.112371 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:11:49.112379 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-28 01:11:49.112385 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-28 01:11:49.112389 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-28 01:11:49.112393 | orchestrator | 2026-03-28 01:11:49.112397 | orchestrator | 2026-03-28 01:11:49.112400 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:11:49.112404 | orchestrator | Saturday 28 March 2026 01:11:46 +0000 (0:00:35.119) 0:03:22.239 ******** 2026-03-28 01:11:49.112408 | orchestrator | =============================================================================== 2026-03-28 01:11:49.112412 | orchestrator | glance : Restart glance-api container ---------------------------------- 35.12s 2026-03-28 01:11:49.112416 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.89s 2026-03-28 01:11:49.112423 | 
orchestrator | glance : Copying over glance-cache.conf for glance_api ----------------- 10.37s 2026-03-28 01:11:49.112427 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 9.87s 2026-03-28 01:11:49.112431 | orchestrator | glance : Check glance containers ---------------------------------------- 6.92s 2026-03-28 01:11:49.112435 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 6.84s 2026-03-28 01:11:49.112439 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 6.61s 2026-03-28 01:11:49.112467 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.12s 2026-03-28 01:11:49.112470 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 6.01s 2026-03-28 01:11:49.112474 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 5.96s 2026-03-28 01:11:49.112478 | orchestrator | glance : Copying over config.json files for services -------------------- 5.94s 2026-03-28 01:11:49.112482 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.90s 2026-03-28 01:11:49.112485 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.35s 2026-03-28 01:11:49.112489 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 5.13s 2026-03-28 01:11:49.112493 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.75s 2026-03-28 01:11:49.112497 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.33s 2026-03-28 01:11:49.112501 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.10s 2026-03-28 01:11:49.112504 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.08s 2026-03-28 01:11:49.112508 | 
orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.80s 2026-03-28 01:11:49.112512 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.69s 2026-03-28 01:11:49.112516 | orchestrator | 2026-03-28 01:11:49 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:52.148621 | orchestrator | 2026-03-28 01:11:52 | INFO  | Task f808a19d-fb2f-48aa-8b4e-86a0dbb1cf58 is in state STARTED 2026-03-28 01:11:52.149154 | orchestrator | 2026-03-28 01:11:52 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:11:52.151509 | orchestrator | 2026-03-28 01:11:52 | INFO  | Task 62b07990-b1b1-4a0b-a59b-70f39556b067 is in state STARTED 2026-03-28 01:11:52.156068 | orchestrator | 2026-03-28 01:11:52 | INFO  | Task 5b8964c9-3610-48f6-9466-f3e2543f58ac is in state SUCCESS 2026-03-28 01:11:52.156208 | orchestrator | 2026-03-28 01:11:52.157468 | orchestrator | 2026-03-28 01:11:52 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:52.159045 | orchestrator | 2026-03-28 01:11:52.159116 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:11:52.159133 | orchestrator | 2026-03-28 01:11:52.159145 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:11:52.159158 | orchestrator | Saturday 28 March 2026 01:08:32 +0000 (0:00:00.301) 0:00:00.301 ******** 2026-03-28 01:11:52.159168 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:11:52.159179 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:11:52.159189 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:11:52.159199 | orchestrator | 2026-03-28 01:11:52.159209 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:11:52.159219 | orchestrator | Saturday 28 March 2026 01:08:32 +0000 (0:00:00.332) 0:00:00.634 ******** 2026-03-28 01:11:52.159229 | 
orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-28 01:11:52.159239 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-28 01:11:52.159249 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-28 01:11:52.159259 | orchestrator | 2026-03-28 01:11:52.159268 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-28 01:11:52.159302 | orchestrator | 2026-03-28 01:11:52.159317 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-28 01:11:52.159334 | orchestrator | Saturday 28 March 2026 01:08:32 +0000 (0:00:00.468) 0:00:01.102 ******** 2026-03-28 01:11:52.159349 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:11:52.159366 | orchestrator | 2026-03-28 01:11:52.159382 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-28 01:11:52.159398 | orchestrator | Saturday 28 March 2026 01:08:33 +0000 (0:00:00.592) 0:00:01.695 ******** 2026-03-28 01:11:52.159414 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-28 01:11:52.159429 | orchestrator | 2026-03-28 01:11:52.159473 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-28 01:11:52.159492 | orchestrator | Saturday 28 March 2026 01:08:37 +0000 (0:00:03.545) 0:00:05.240 ******** 2026-03-28 01:11:52.159508 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-28 01:11:52.159522 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-28 01:11:52.159537 | orchestrator | 2026-03-28 01:11:52.159554 | orchestrator | TASK [service-ks-register : cinder | Creating projects] 
************************ 2026-03-28 01:11:52.159617 | orchestrator | Saturday 28 March 2026 01:08:44 +0000 (0:00:07.508) 0:00:12.749 ******** 2026-03-28 01:11:52.159635 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 01:11:52.159710 | orchestrator | 2026-03-28 01:11:52.159728 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-28 01:11:52.159803 | orchestrator | Saturday 28 March 2026 01:08:47 +0000 (0:00:03.200) 0:00:15.949 ******** 2026-03-28 01:11:52.159814 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 01:11:52.159824 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-28 01:11:52.159834 | orchestrator | 2026-03-28 01:11:52.159844 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-28 01:11:52.159854 | orchestrator | Saturday 28 March 2026 01:08:51 +0000 (0:00:03.699) 0:00:19.649 ******** 2026-03-28 01:11:52.159864 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 01:11:52.159880 | orchestrator | 2026-03-28 01:11:52.159896 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-28 01:11:52.159913 | orchestrator | Saturday 28 March 2026 01:08:54 +0000 (0:00:03.138) 0:00:22.788 ******** 2026-03-28 01:11:52.159929 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-28 01:11:52.159946 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-28 01:11:52.159963 | orchestrator | 2026-03-28 01:11:52.159980 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-28 01:11:52.159997 | orchestrator | Saturday 28 March 2026 01:09:02 +0000 (0:00:08.034) 0:00:30.822 ******** 2026-03-28 01:11:52.160018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:52.160072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:52.160098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:52.160109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.160122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.160132 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.160149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.160181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.160192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.160203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.160214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.160224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.160241 | orchestrator | 2026-03-28 01:11:52.160251 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-28 01:11:52.160261 | orchestrator | Saturday 28 March 2026 01:09:05 +0000 (0:00:02.979) 0:00:33.801 ******** 2026-03-28 01:11:52.160271 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:52.160280 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:52.160290 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:52.160300 | orchestrator | 2026-03-28 01:11:52.160310 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-28 01:11:52.160324 | orchestrator | Saturday 28 March 2026 01:09:05 +0000 (0:00:00.374) 0:00:34.175 ******** 2026-03-28 01:11:52.160334 | orchestrator | 
included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:11:52.160345 | orchestrator | 2026-03-28 01:11:52.160360 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-28 01:11:52.160370 | orchestrator | Saturday 28 March 2026 01:09:06 +0000 (0:00:00.930) 0:00:35.106 ******** 2026-03-28 01:11:52.160380 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-28 01:11:52.160390 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-28 01:11:52.160400 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-28 01:11:52.160410 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-28 01:11:52.160420 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-28 01:11:52.160429 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-28 01:11:52.160500 | orchestrator | 2026-03-28 01:11:52.160512 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-28 01:11:52.160522 | orchestrator | Saturday 28 March 2026 01:09:09 +0000 (0:00:02.976) 0:00:38.082 ******** 2026-03-28 01:11:52.160533 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-28 01:11:52.160545 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-28 01:11:52.160556 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-28 01:11:52.160579 | orchestrator | skipping: 
[testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-28 01:11:52.160599 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-28 01:11:52.160610 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-28 01:11:52.160620 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-28 01:11:52.160632 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 
2026-03-28 01:11:52.160648 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-28 01:11:52.160670 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-28 01:11:52.160682 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-28 01:11:52.160692 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-28 01:11:52.160702 | orchestrator | 2026-03-28 01:11:52.160712 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-28 01:11:52.160722 | orchestrator | Saturday 28 March 2026 01:09:15 +0000 (0:00:05.353) 0:00:43.436 ******** 2026-03-28 01:11:52.160732 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-28 01:11:52.160743 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-28 01:11:52.160763 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-28 01:11:52.160773 | orchestrator | 2026-03-28 01:11:52.160783 | orchestrator | 
TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-28 01:11:52.160793 | orchestrator | Saturday 28 March 2026 01:09:17 +0000 (0:00:02.564) 0:00:46.000 ******** 2026-03-28 01:11:52.160802 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-28 01:11:52.160812 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-28 01:11:52.160822 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-28 01:11:52.160832 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-28 01:11:52.160841 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-28 01:11:52.160851 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-28 01:11:52.160861 | orchestrator | 2026-03-28 01:11:52.160870 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-28 01:11:52.160880 | orchestrator | Saturday 28 March 2026 01:09:22 +0000 (0:00:04.350) 0:00:50.350 ******** 2026-03-28 01:11:52.160890 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-28 01:11:52.160900 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-28 01:11:52.160910 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-28 01:11:52.160919 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-28 01:11:52.160929 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-28 01:11:52.160939 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-28 01:11:52.160948 | orchestrator | 2026-03-28 01:11:52.160958 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-28 01:11:52.160968 | orchestrator | Saturday 28 March 2026 01:09:23 +0000 (0:00:01.348) 0:00:51.699 ******** 2026-03-28 01:11:52.160978 | 
orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:52.160987 | orchestrator | 2026-03-28 01:11:52.161008 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-28 01:11:52.161026 | orchestrator | Saturday 28 March 2026 01:09:23 +0000 (0:00:00.172) 0:00:51.871 ******** 2026-03-28 01:11:52.161042 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:52.161058 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:52.161082 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:52.161099 | orchestrator | 2026-03-28 01:11:52.161114 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-28 01:11:52.161130 | orchestrator | Saturday 28 March 2026 01:09:24 +0000 (0:00:00.565) 0:00:52.436 ******** 2026-03-28 01:11:52.161146 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:11:52.161163 | orchestrator | 2026-03-28 01:11:52.161179 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-28 01:11:52.161197 | orchestrator | Saturday 28 March 2026 01:09:25 +0000 (0:00:00.877) 0:00:53.313 ******** 2026-03-28 01:11:52.161214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:52.161244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:52.161256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:52.161267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.161293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.161304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}}) 2026-03-28 01:11:52.161314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.161331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.161342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.161352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.161842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.161866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.161887 | orchestrator | 2026-03-28 01:11:52.161897 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-28 01:11:52.161907 | orchestrator | Saturday 28 March 2026 01:09:30 +0000 (0:00:05.267) 0:00:58.581 ******** 2026-03-28 01:11:52.161918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 01:11:52.161928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.161938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.161960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.161971 | orchestrator | skipping: 
[testbed-node-1] 2026-03-28 01:11:52.161981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 01:11:52.162001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.162057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.162071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.162082 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:52.162097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 01:11:52.162114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.162131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.162141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.162151 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:52.162161 | orchestrator | 2026-03-28 01:11:52.162171 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-28 01:11:52.162180 | orchestrator | Saturday 28 March 2026 01:09:31 +0000 (0:00:01.350) 0:00:59.931 ******** 2026-03-28 01:11:52.162191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 01:11:52.162201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.162244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.162264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.162274 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:52.162285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 01:11:52.162295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.162305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.162354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.162371 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:52.162382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 01:11:52.162400 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.162410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.162420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.162433 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:52.162502 | orchestrator | 2026-03-28 01:11:52.162514 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-28 01:11:52.162525 | orchestrator | Saturday 28 March 2026 01:09:33 +0000 (0:00:01.892) 0:01:01.824 ******** 2026-03-28 01:11:52.162542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:52.162568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:52.162580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:52.162592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.162604 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.162615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.162632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 
01:11:52.162647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.162657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.162666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.162675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.162685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.162699 | orchestrator | 2026-03-28 01:11:52.162713 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-28 01:11:52.162722 | orchestrator | Saturday 28 March 2026 
01:09:39 +0000 (0:00:06.346) 0:01:08.170 ******** 2026-03-28 01:11:52.162732 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-28 01:11:52.162745 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-28 01:11:52.162755 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-28 01:11:52.162763 | orchestrator | 2026-03-28 01:11:52.162772 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-28 01:11:52.162780 | orchestrator | Saturday 28 March 2026 01:09:42 +0000 (0:00:02.397) 0:01:10.568 ******** 2026-03-28 01:11:52.162788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:52.162797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:52.162805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:52.162814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.162836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.162844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.162853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.162861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.162869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.162878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.162902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.162912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-03-28 01:11:52.162920 | orchestrator | 2026-03-28 01:11:52.162928 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-28 01:11:52.162937 | orchestrator | Saturday 28 March 2026 01:10:03 +0000 (0:00:21.387) 0:01:31.956 ******** 2026-03-28 01:11:52.162945 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:52.162953 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:52.162961 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:52.162969 | orchestrator | 2026-03-28 01:11:52.162977 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-28 01:11:52.162985 | orchestrator | Saturday 28 March 2026 01:10:07 +0000 (0:00:03.376) 0:01:35.332 ******** 2026-03-28 01:11:52.162994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 01:11:52.163002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.163016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.163032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.163041 | orchestrator | skipping: 
[testbed-node-1] 2026-03-28 01:11:52.163049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 01:11:52.163058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.163066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.163079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.163087 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:52.163103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 01:11:52.163112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.163120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.163128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.163137 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:52.163145 | orchestrator | 2026-03-28 01:11:52.163165 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-28 01:11:52.163173 | orchestrator | Saturday 28 March 2026 01:10:08 +0000 (0:00:01.502) 0:01:36.835 ******** 2026-03-28 01:11:52.163181 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:52.163189 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:52.163197 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:52.163205 | orchestrator | 2026-03-28 01:11:52.163213 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-28 01:11:52.163221 | orchestrator | Saturday 28 March 2026 01:10:09 +0000 (0:00:00.641) 0:01:37.476 ******** 2026-03-28 01:11:52.163229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:52.163247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:52.163256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:52.163264 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.163278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.163286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.163294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.163311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.163320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.163328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.163341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.163350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.163358 | orchestrator | 2026-03-28 01:11:52.163366 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-28 01:11:52.163374 | orchestrator | Saturday 28 March 2026 01:10:13 +0000 (0:00:04.154) 0:01:41.630 ******** 2026-03-28 01:11:52.163383 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:52.163390 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:52.163398 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:52.163406 | orchestrator | 2026-03-28 01:11:52.163414 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-28 01:11:52.163422 | orchestrator | Saturday 28 March 2026 01:10:14 +0000 (0:00:00.612) 0:01:42.243 ******** 2026-03-28 01:11:52.163430 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:52.163480 | orchestrator | 2026-03-28 01:11:52.163489 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-28 01:11:52.163502 | orchestrator | Saturday 28 March 2026 01:10:16 +0000 (0:00:02.233) 0:01:44.477 ******** 2026-03-28 01:11:52.163511 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:52.163518 | orchestrator | 2026-03-28 01:11:52.163527 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-28 01:11:52.163539 | orchestrator | Saturday 28 March 2026 01:10:19 +0000 (0:00:02.890) 0:01:47.367 ******** 2026-03-28 01:11:52.163548 | orchestrator | changed: 
[testbed-node-0] 2026-03-28 01:11:52.163556 | orchestrator | 2026-03-28 01:11:52.163564 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-28 01:11:52.163572 | orchestrator | Saturday 28 March 2026 01:10:41 +0000 (0:00:22.121) 0:02:09.489 ******** 2026-03-28 01:11:52.163580 | orchestrator | 2026-03-28 01:11:52.163588 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-28 01:11:52.163596 | orchestrator | Saturday 28 March 2026 01:10:41 +0000 (0:00:00.075) 0:02:09.565 ******** 2026-03-28 01:11:52.163604 | orchestrator | 2026-03-28 01:11:52.163612 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-28 01:11:52.163620 | orchestrator | Saturday 28 March 2026 01:10:41 +0000 (0:00:00.076) 0:02:09.642 ******** 2026-03-28 01:11:52.163628 | orchestrator | 2026-03-28 01:11:52.163635 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-28 01:11:52.163643 | orchestrator | Saturday 28 March 2026 01:10:41 +0000 (0:00:00.072) 0:02:09.714 ******** 2026-03-28 01:11:52.163651 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:52.163669 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:52.163682 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:52.163696 | orchestrator | 2026-03-28 01:11:52.163707 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-28 01:11:52.163718 | orchestrator | Saturday 28 March 2026 01:11:05 +0000 (0:00:23.851) 0:02:33.565 ******** 2026-03-28 01:11:52.163729 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:52.163741 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:52.163753 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:52.163764 | orchestrator | 2026-03-28 01:11:52.163775 | orchestrator | RUNNING HANDLER [cinder : Restart 
cinder-volume container] ********************* 2026-03-28 01:11:52.163788 | orchestrator | Saturday 28 March 2026 01:11:10 +0000 (0:00:05.093) 0:02:38.659 ******** 2026-03-28 01:11:52.163800 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:52.163812 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:52.163824 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:52.163837 | orchestrator | 2026-03-28 01:11:52.163848 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-28 01:11:52.163861 | orchestrator | Saturday 28 March 2026 01:11:34 +0000 (0:00:23.665) 0:03:02.325 ******** 2026-03-28 01:11:52.163872 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:52.163884 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:52.163896 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:52.163908 | orchestrator | 2026-03-28 01:11:52.163921 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-28 01:11:52.163934 | orchestrator | Saturday 28 March 2026 01:11:48 +0000 (0:00:14.617) 0:03:16.942 ******** 2026-03-28 01:11:52.163946 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:52.163958 | orchestrator | 2026-03-28 01:11:52.163971 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:11:52.163986 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-28 01:11:52.164000 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 01:11:52.164012 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 01:11:52.164023 | orchestrator | 2026-03-28 01:11:52.164034 | orchestrator | 2026-03-28 01:11:52.164044 | orchestrator | TASKS RECAP 
********************************************************************
2026-03-28 01:11:52.164053 | orchestrator | Saturday 28 March 2026 01:11:49 +0000 (0:00:00.327) 0:03:17.269 ********
2026-03-28 01:11:52.164060 | orchestrator | ===============================================================================
2026-03-28 01:11:52.164067 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 23.85s
2026-03-28 01:11:52.164073 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 23.67s
2026-03-28 01:11:52.164080 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 22.12s
2026-03-28 01:11:52.164087 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 21.39s
2026-03-28 01:11:52.164093 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 14.62s
2026-03-28 01:11:52.164100 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.03s
2026-03-28 01:11:52.164107 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.51s
2026-03-28 01:11:52.164113 | orchestrator | cinder : Copying over config.json files for services -------------------- 6.35s
2026-03-28 01:11:52.164120 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 5.35s
2026-03-28 01:11:52.164127 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.27s
2026-03-28 01:11:52.164133 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.09s
2026-03-28 01:11:52.164147 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 4.35s
2026-03-28 01:11:52.164154 | orchestrator | cinder : Check cinder containers ---------------------------------------- 4.15s
2026-03-28 01:11:52.164161 | orchestrator | service-ks-register : cinder |
Creating users --------------------------- 3.70s 2026-03-28 01:11:52.164172 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.55s 2026-03-28 01:11:52.164179 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.38s 2026-03-28 01:11:52.164186 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.20s 2026-03-28 01:11:52.164198 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.14s 2026-03-28 01:11:52.164206 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.98s 2026-03-28 01:11:52.164212 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.98s 2026-03-28 01:11:55.200979 | orchestrator | 2026-03-28 01:11:55 | INFO  | Task f808a19d-fb2f-48aa-8b4e-86a0dbb1cf58 is in state STARTED 2026-03-28 01:11:55.202231 | orchestrator | 2026-03-28 01:11:55 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:11:55.204079 | orchestrator | 2026-03-28 01:11:55 | INFO  | Task 62b07990-b1b1-4a0b-a59b-70f39556b067 is in state STARTED 2026-03-28 01:11:55.204191 | orchestrator | 2026-03-28 01:11:55 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:58.245731 | orchestrator | 2026-03-28 01:11:58 | INFO  | Task f808a19d-fb2f-48aa-8b4e-86a0dbb1cf58 is in state STARTED 2026-03-28 01:11:58.247291 | orchestrator | 2026-03-28 01:11:58 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:11:58.249566 | orchestrator | 2026-03-28 01:11:58 | INFO  | Task 62b07990-b1b1-4a0b-a59b-70f39556b067 is in state STARTED 2026-03-28 01:11:58.249592 | orchestrator | 2026-03-28 01:11:58 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:12:01.295506 | orchestrator | 2026-03-28 01:12:01 | INFO  | Task f808a19d-fb2f-48aa-8b4e-86a0dbb1cf58 is in state STARTED 2026-03-28 01:12:01.298601 | 
orchestrator | 2026-03-28 01:12:01 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:12:01.300526 | orchestrator | 2026-03-28 01:12:01 | INFO  | Task 62b07990-b1b1-4a0b-a59b-70f39556b067 is in state STARTED 2026-03-28 01:12:01.300577 | orchestrator | 2026-03-28 01:12:01 | INFO  | Wait 1 second(s) until the next check
[identical status checks for the same three tasks repeated every ~3 seconds from 01:12:04 through 01:12:56]
2026-03-28 01:14:59.237310 | orchestrator | 2026-03-28 01:14:59 | INFO  | Task f808a19d-fb2f-48aa-8b4e-86a0dbb1cf58 is in state SUCCESS 2026-03-28 01:14:59.237425 | orchestrator | 2026-03-28 01:14:59 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in
state STARTED 2026-03-28 01:14:59.239513 | orchestrator | 2026-03-28 01:14:59 | INFO  | Task 62b07990-b1b1-4a0b-a59b-70f39556b067 is in state SUCCESS 2026-03-28 01:14:59.241592 | orchestrator | 2026-03-28 01:14:59.241634 | orchestrator | 2026-03-28 01:14:59.241640 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:14:59.241646 | orchestrator | 2026-03-28 01:14:59.241650 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:14:59.241655 | orchestrator | Saturday 28 March 2026 01:11:20 +0000 (0:00:00.251) 0:00:00.251 ******** 2026-03-28 01:14:59.241660 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:14:59.241665 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:14:59.241669 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:14:59.241673 | orchestrator | 2026-03-28 01:14:59.241678 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:14:59.241682 | orchestrator | Saturday 28 March 2026 01:11:20 +0000 (0:00:00.424) 0:00:00.675 ******** 2026-03-28 01:14:59.241686 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2026-03-28 01:14:59.241691 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2026-03-28 01:14:59.241695 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2026-03-28 01:14:59.241699 | orchestrator | 2026-03-28 01:14:59.241703 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2026-03-28 01:14:59.241707 | orchestrator | 2026-03-28 01:14:59.241711 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2026-03-28 01:14:59.241715 | orchestrator | Saturday 28 March 2026 01:11:21 +0000 (0:00:00.890) 0:00:01.566 ******** 2026-03-28 01:14:59.241719 | orchestrator | 2026-03-28 01:14:59.241723 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to 
be UP' is running] ********** 2026-03-28 01:14:59.241727 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:14:59.241731 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:14:59.241735 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:14:59.241739 | orchestrator | 2026-03-28 01:14:59.241743 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:14:59.241748 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:14:59.241753 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:14:59.241757 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:14:59.241761 | orchestrator | 2026-03-28 01:14:59.241765 | orchestrator | 2026-03-28 01:14:59.241770 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:14:59.241774 | orchestrator | Saturday 28 March 2026 01:14:12 +0000 (0:02:51.044) 0:02:52.610 ******** 2026-03-28 01:14:59.241778 | orchestrator | =============================================================================== 2026-03-28 01:14:59.241782 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 171.04s 2026-03-28 01:14:59.241837 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.89s 2026-03-28 01:14:59.241843 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s 2026-03-28 01:14:59.241848 | orchestrator | 2026-03-28 01:14:59.241852 | orchestrator | 2026-03-28 01:14:59.241856 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:14:59.241860 | orchestrator | 2026-03-28 01:14:59.241864 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 
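The repeated `Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines above come from the deployment driver polling remote task state until every task reports SUCCESS. A minimal sketch of that polling pattern in Python (the `get_state` callable is a stand-in for the real task-status lookup, which this log does not show; names and signatures are illustrative, not the actual osism API):

```python
import time


def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=300.0):
    """Poll task states until every task reports SUCCESS.

    `get_state` is a caller-supplied callable mapping a task id to a
    state string such as "STARTED" or "SUCCESS".  Returns True when all
    tasks finished, False if the deadline passes first.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending and time.monotonic() >= deadline:
            return False
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return True
```

With three task ids and a one-second interval this reproduces the cadence seen in the log above.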
2026-03-28 01:14:59.241868 | orchestrator | Saturday 28 March 2026 01:11:52 +0000 (0:00:00.288) 0:00:00.288 ******** 2026-03-28 01:14:59.241872 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:14:59.241893 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:14:59.241898 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:14:59.241902 | orchestrator | 2026-03-28 01:14:59.241906 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:14:59.241910 | orchestrator | Saturday 28 March 2026 01:11:52 +0000 (0:00:00.313) 0:00:00.602 ******** 2026-03-28 01:14:59.241914 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-03-28 01:14:59.241918 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-03-28 01:14:59.241922 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-03-28 01:14:59.241926 | orchestrator | 2026-03-28 01:14:59.241930 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-03-28 01:14:59.241935 | orchestrator | 2026-03-28 01:14:59.241938 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-28 01:14:59.241943 | orchestrator | Saturday 28 March 2026 01:11:52 +0000 (0:00:00.480) 0:00:01.082 ******** 2026-03-28 01:14:59.241946 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:14:59.241951 | orchestrator | 2026-03-28 01:14:59.241989 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-28 01:14:59.241994 | orchestrator | Saturday 28 March 2026 01:11:53 +0000 (0:00:00.576) 0:00:01.659 ******** 2026-03-28 01:14:59.242001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:59.242083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:59.242097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:59.242108 | 
orchestrator | 2026-03-28 01:14:59.242112 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-28 01:14:59.242116 | orchestrator | Saturday 28 March 2026 01:11:54 +0000 (0:00:00.749) 0:00:02.408 ******** 2026-03-28 01:14:59.242125 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-28 01:14:59.242130 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-28 01:14:59.242135 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 01:14:59.242139 | orchestrator | 2026-03-28 01:14:59.242143 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-28 01:14:59.242147 | orchestrator | Saturday 28 March 2026 01:11:55 +0000 (0:00:01.030) 0:00:03.439 ******** 2026-03-28 01:14:59.242151 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:14:59.242155 | orchestrator | 2026-03-28 01:14:59.242163 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-28 01:14:59.242167 | orchestrator | Saturday 28 March 2026 01:11:56 +0000 (0:00:00.869) 0:00:04.308 ******** 2026-03-28 01:14:59.242171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 
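Tasks such as `Ensuring config directories exist` and the cert-copy loop above iterate over a dict of service definitions (the `item={'key': …, 'value': …}` pairs in the log) and act only on entries marked `enabled`. A toy sketch of that filtering step, using a service entry shaped like the grafana item above (illustrative only, not kolla-ansible's actual code):

```python
def enabled_services(services):
    """Return only the service entries flagged enabled, keyed by name."""
    return {
        name: spec
        for name, spec in services.items()
        if spec.get("enabled")
    }


# A service map shaped like the loop items in the log above; the
# disabled entry is a hypothetical example, not from the log.
services = {
    "grafana": {
        "container_name": "grafana",
        "group": "grafana",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/grafana:12.3.0.20251130",
        "volumes": ["/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro"],
    },
    "grafana-extra": {"container_name": "grafana_extra", "enabled": False},
}

active = enabled_services(services)
```

Only the enabled `grafana` entry survives the filter, which is why disabled services show up as `skipping:` results in the task output.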
2026-03-28 01:14:59.242175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:59.242183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:59.242188 | orchestrator | 2026-03-28 01:14:59.242226 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-28 01:14:59.242230 | orchestrator | Saturday 28 March 2026 01:11:57 +0000 (0:00:01.630) 0:00:05.938 ******** 2026-03-28 01:14:59.242235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 01:14:59.242244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 01:14:59.242249 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:59.242254 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:59.242263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 01:14:59.242290 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:59.242296 | orchestrator | 2026-03-28 01:14:59.242300 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-28 01:14:59.242305 | orchestrator | Saturday 28 March 2026 01:11:58 +0000 (0:00:00.487) 0:00:06.426 ******** 2026-03-28 01:14:59.242310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 01:14:59.242315 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:59.242319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 01:14:59.242325 | orchestrator | skipping: 
[testbed-node-1] 2026-03-28 01:14:59.242332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 01:14:59.242341 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:59.242346 | orchestrator | 2026-03-28 01:14:59.242351 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-28 01:14:59.242356 | orchestrator | Saturday 28 March 2026 01:11:59 +0000 (0:00:00.944) 0:00:07.370 ******** 2026-03-28 01:14:59.242360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:59.242368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:59.242373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:59.242379 | orchestrator | 2026-03-28 01:14:59.242383 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-28 01:14:59.242388 | orchestrator | Saturday 28 March 2026 01:12:00 +0000 (0:00:01.477) 0:00:08.848 ******** 2026-03-28 01:14:59.242393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:59.242403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:59.242411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:59.242416 | orchestrator | 2026-03-28 01:14:59.242421 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-28 01:14:59.242426 | orchestrator | Saturday 28 March 2026 01:12:02 +0000 (0:00:01.414) 0:00:10.263 ******** 2026-03-28 
01:14:59.242431 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:59.242435 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:59.242440 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:59.242444 | orchestrator | 2026-03-28 01:14:59.242449 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-28 01:14:59.242454 | orchestrator | Saturday 28 March 2026 01:12:02 +0000 (0:00:00.556) 0:00:10.820 ******** 2026-03-28 01:14:59.242458 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-28 01:14:59.242463 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-28 01:14:59.242468 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-28 01:14:59.242472 | orchestrator | 2026-03-28 01:14:59.242477 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-03-28 01:14:59.242481 | orchestrator | Saturday 28 March 2026 01:12:03 +0000 (0:00:01.376) 0:00:12.197 ******** 2026-03-28 01:14:59.242488 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-28 01:14:59.242493 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-28 01:14:59.242498 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-28 01:14:59.242502 | orchestrator | 2026-03-28 01:14:59.242507 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-03-28 01:14:59.242512 | orchestrator | Saturday 28 March 2026 01:12:05 +0000 (0:00:01.389) 0:00:13.586 ******** 2026-03-28 01:14:59.242516 | orchestrator | ok: [testbed-node-0 -> 
localhost] 2026-03-28 01:14:59.242521 | orchestrator | 2026-03-28 01:14:59.242525 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-03-28 01:14:59.242530 | orchestrator | Saturday 28 March 2026 01:12:06 +0000 (0:00:00.889) 0:00:14.476 ******** 2026-03-28 01:14:59.242535 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-03-28 01:14:59.242540 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-03-28 01:14:59.242544 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:14:59.242549 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:14:59.242554 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:14:59.242558 | orchestrator | 2026-03-28 01:14:59.242562 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-03-28 01:14:59.242575 | orchestrator | Saturday 28 March 2026 01:12:07 +0000 (0:00:00.770) 0:00:15.246 ******** 2026-03-28 01:14:59.242579 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:59.242583 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:59.242587 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:59.242591 | orchestrator | 2026-03-28 01:14:59.242595 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-03-28 01:14:59.242599 | orchestrator | Saturday 28 March 2026 01:12:07 +0000 (0:00:00.626) 0:00:15.873 ******** 2026-03-28 01:14:59.242603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1312143, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1360526, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1312143, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1360526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1312143, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1360526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1312202, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1774657145.150622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1312202, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.150622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1312202, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.150622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1312156, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1774657145.1396954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1312156, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1396954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1312156, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1396954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1312203, 'dev': 111, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1521795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1312203, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1521795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1312203, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1521795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
26655, 'inode': 1312177, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1434164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1312177, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1434164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1312177, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1434164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1312186, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1479673, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1312186, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1479673, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1312186, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1479673, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1312140, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1343932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1312140, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1343932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1312140, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1343932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1312148, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1376123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1312148, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1376123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1312148, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1376123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.242797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1312161, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1401956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 01:14:59.242801 .. 01:14:59.243140 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (loop items condensed: each dashboard below was reported changed on all three nodes with identical stat results — regular file, mode '0644', uid 0 / gid 0 (root:root), dev 111, nlink 1, atime/mtime 1764530892.0, rusr/wusr/rgrp/roth True, all other mode/type flags False; path is /operations/grafana/dashboards/<key>)
2026-03-28 01:14:59 | orchestrator |   ceph/cephfs-overview.json                     size 9025    inode 1312161  ctime 1774657145.1401956
2026-03-28 01:14:59 | orchestrator |   ceph/pool-detail.json                         size 19609   inode 1312181  ctime 1774657145.1448944
2026-03-28 01:14:59 | orchestrator |   ceph/rbd-details.json                         size 12997   inode 1312198  ctime 1774657145.1503196
2026-03-28 01:14:59 | orchestrator |   ceph/ceph_overview.json                       size 80386   inode 1312152  ctime 1774657145.1376183
2026-03-28 01:14:59 | orchestrator |   ceph/radosgw-detail.json                      size 19695   inode 1312185  ctime 1774657145.145978
2026-03-28 01:14:59 | orchestrator |   ceph/osds-overview.json                       size 38432   inode 1312179  ctime 1774657145.1438165
2026-03-28 01:14:59 | orchestrator |   ceph/multi-cluster-overview.json              size 62676   inode 1312172  ctime 1774657145.1433008
2026-03-28 01:14:59 | orchestrator |   ceph/hosts-overview.json                      size 27218   inode 1312167  ctime 1774657145.1422288
2026-03-28 01:14:59 | orchestrator |   ceph/pool-overview.json                       size 49139   inode 1312182  ctime 1774657145.1453211
2026-03-28 01:14:59 | orchestrator |   ceph/host-details.json                        size 44791   inode 1312163  ctime 1774657145.1408613
2026-03-28 01:14:59 | orchestrator |   ceph/radosgw-sync-overview.json               size 16156   inode 1312197  ctime 1774657145.1492362
2026-03-28 01:14:59 | orchestrator |   openstack/openstack.json                      size 57270   inode 1312321  ctime 1774657145.184848
2026-03-28 01:14:59 | orchestrator |   infrastructure/haproxy.json                   size 410814  inode 1312230  ctime 1774657145.1660976
2026-03-28 01:14:59 | orchestrator |   infrastructure/database.json                  size 30898   inode 1312219  ctime 1774657145.1567838
2026-03-28 01:14:59 | orchestrator |   infrastructure/node-rsrc-use.json             size 15725   inode 1312254  ctime 1774657145.1693184
2026-03-28 01:14:59 | orchestrator |   infrastructure/alertmanager-overview.json     size 9645    inode 1312209  ctime 1774657145.1529665
2026-03-28 01:14:59 | orchestrator |   infrastructure/opensearch.json                size 65458   inode 1312289  ctime 1774657145.178982
2026-03-28 01:14:59 | orchestrator |   infrastructure/node_exporter_full.json        size 682774  inode 1312256  ctime 1774657145.1760707
2026-03-28 01:14:59 | orchestrator |   infrastructure/prometheus-remote-write.json   size 22317   inode 1312295  ctime 1774657145.1791706
2026-03-28 01:14:59 | orchestrator |   infrastructure/redfish.json                   size 38087   inode 1312318  ctime 1774657145.1837091
2026-03-28 01:14:59 | orchestrator |   infrastructure/nodes.json                     size 21109   inode 1312284  ctime 1774657145.1780648
2026-03-28 01:14:59.243144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode':
1312284, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1780648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1312284, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1780648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1312248, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1675615, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1312248, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1675615, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1312248, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1675615, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1312226, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1605198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1312226, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1605198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1312226, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1605198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1312244, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1666243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1312244, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1666243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1312244, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1666243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1312221, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1594112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1312221, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1594112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1312221, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1594112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1312251, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1691444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243270 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1312251, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1691444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1312251, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1691444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1312307, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.183078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1312307, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.183078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1312307, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.183078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1312301, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1813953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1312301, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1813953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1312301, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1813953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1312210, 'dev': 111, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1536255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1312210, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1536255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1312210, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1536255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
53882, 'inode': 1312212, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.156635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1312212, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.156635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1312212, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.156635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1312278, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1770732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1312278, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1770732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1312278, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1770732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1312300, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1797502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1312300, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1797502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1312300, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774657145.1797502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:59.243387 | orchestrator | 2026-03-28 01:14:59.243391 | 
orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-03-28 01:14:59.243395 | orchestrator | Saturday 28 March 2026 01:12:45 +0000 (0:00:37.789) 0:00:53.663 ********
2026-03-28 01:14:59.243402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-28 01:14:59.243407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-28 01:14:59.243411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-28 01:14:59.243415 | orchestrator | 
2026-03-28 01:14:59.243419 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-03-28 01:14:59.243426 | orchestrator | Saturday 28 March 2026 01:12:46 +0000 (0:00:01.075) 0:00:54.739 ********
2026-03-28 01:14:59.243430 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:59.243434 | orchestrator | 
2026-03-28 01:14:59.243438 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-03-28 01:14:59.243442 | orchestrator | Saturday 28 March 2026 01:12:49 +0000 (0:00:02.533) 0:00:57.272 ********
2026-03-28 01:14:59.243446 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:59.243450 | orchestrator | 
2026-03-28 01:14:59.243454 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-28 01:14:59.243458 | orchestrator | Saturday 28 March 2026 01:12:51 +0000 (0:00:00.065) 0:00:59.624 ********
2026-03-28 01:14:59.243462 | orchestrator | 
2026-03-28 01:14:59.243466 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-28 01:14:59.243470 | orchestrator | Saturday 28 March 2026 01:12:51 +0000 (0:00:00.086) 0:00:59.690 ********
2026-03-28 01:14:59.243474 | orchestrator | 
2026-03-28 01:14:59.243478 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-28 01:14:59.243482 | orchestrator | Saturday 28 March 2026 01:12:51 +0000 (0:00:00.086) 0:00:59.776 ********
2026-03-28 01:14:59.243489 | orchestrator | 
2026-03-28 01:14:59.243493 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-03-28 01:14:59.243497 | orchestrator | Saturday 28 March 2026 01:12:51 +0000 (0:00:00.265) 0:01:00.042 ********
2026-03-28 01:14:59.243501 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:59.243505 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:59.243509 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:59.243513 | orchestrator | 
2026-03-28 01:14:59.243517 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-03-28 01:14:59.243521 | orchestrator | Saturday 28 March 2026 01:12:58 +0000 (0:00:06.974) 0:01:07.017 ********
2026-03-28 01:14:59.243525 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:59.243529 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:59.243533 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-03-28 01:14:59.243537 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-03-28 01:14:59.243541 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-03-28 01:14:59.243545 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left).
2026-03-28 01:14:59.243549 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:14:59.243553 | orchestrator | 
2026-03-28 01:14:59.243557 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-03-28 01:14:59.243561 | orchestrator | Saturday 28 March 2026 01:13:50 +0000 (0:00:51.479) 0:01:58.496 ********
2026-03-28 01:14:59.243565 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:59.243569 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:14:59.243572 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:14:59.243576 | orchestrator | 
2026-03-28 01:14:59.243580 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-03-28 01:14:59.243587 | orchestrator | Saturday 28 March 2026 01:14:16 +0000 (0:00:26.699) 0:02:25.196 ********
2026-03-28 01:14:59.243592 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:14:59.243596 | orchestrator | 
2026-03-28 01:14:59.243600 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-03-28 01:14:59.243604 | orchestrator | Saturday 28 March 2026 01:14:19 +0000 (0:00:02.427) 0:02:27.624 ********
2026-03-28 01:14:59.243608 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:59.243612 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:59.243616 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:59.243620 | orchestrator | 
2026-03-28 01:14:59.243624 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-03-28 01:14:59.243628 | orchestrator | Saturday 28 March 2026 01:14:19 +0000 (0:00:00.550) 0:02:28.174 ********
2026-03-28 01:14:59.243632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-03-28 01:14:59.243637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-03-28 01:14:59.243641 | orchestrator | 2026-03-28 01:14:59.243645 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-03-28 01:14:59.243649 | orchestrator | Saturday 28 March 2026 01:14:22 +0000 (0:00:02.626) 0:02:30.801 ******** 2026-03-28 01:14:59.243653 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:59.243657 | orchestrator | 2026-03-28 01:14:59.243661 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:14:59.243668 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 01:14:59.243673 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 01:14:59.243677 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 01:14:59.243681 | orchestrator | 2026-03-28 01:14:59.243685 | orchestrator | 2026-03-28 01:14:59.243691 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:14:59.243695 | orchestrator | Saturday 28 March 2026 01:14:22 +0000 (0:00:00.303) 0:02:31.104 ******** 2026-03-28 01:14:59.243699 | orchestrator | =============================================================================== 2026-03-28 01:14:59.243703 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 51.48s 2026-03-28 01:14:59.243707 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 37.79s 2026-03-28 01:14:59.243711 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 26.70s 2026-03-28 01:14:59.243715 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.97s 2026-03-28 01:14:59.243719 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.63s 2026-03-28 01:14:59.243723 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.53s 2026-03-28 01:14:59.243727 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.43s 2026-03-28 01:14:59.243731 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.35s 2026-03-28 01:14:59.243734 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.63s 2026-03-28 01:14:59.243738 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.48s 2026-03-28 01:14:59.243742 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.42s 2026-03-28 01:14:59.243746 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.39s 2026-03-28 01:14:59.243750 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.38s 2026-03-28 01:14:59.243754 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.08s 2026-03-28 01:14:59.243758 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.03s 2026-03-28 01:14:59.243762 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.94s 2026-03-28 01:14:59.243766 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.89s 2026-03-28 01:14:59.243770 | orchestrator | grafana : include_tasks 
------------------------------------------------- 0.87s 2026-03-28 01:14:59.243773 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.77s 2026-03-28 01:14:59.243777 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.75s 2026-03-28 01:14:59.243781 | orchestrator | 2026-03-28 01:14:59 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:02.278073 | orchestrator | 2026-03-28 01:15:02 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:15:02.278174 | orchestrator | 2026-03-28 01:15:02 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:15:02.278338 | orchestrator | 2026-03-28 01:15:02 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:05.317304 | orchestrator | 2026-03-28 01:15:05 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:15:05.318605 | orchestrator | 2026-03-28 01:15:05 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:15:05.318772 | orchestrator | 2026-03-28 01:15:05 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:08.349122 | orchestrator | 2026-03-28 01:15:08 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:15:08.350328 | orchestrator | 2026-03-28 01:15:08 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:15:08.350390 | orchestrator | 2026-03-28 01:15:08 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:11.392063 | orchestrator | 2026-03-28 01:15:11 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:15:11.395761 | orchestrator | 2026-03-28 01:15:11 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:15:11.395836 | orchestrator | 2026-03-28 01:15:11 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:14.429386 | orchestrator | 
2026-03-28 01:15:14 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:15:14.429595 | orchestrator | 2026-03-28 01:15:14 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:15:14.429726 | orchestrator | 2026-03-28 01:15:14 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:17.465256 | orchestrator | 2026-03-28 01:15:17 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:15:17.466677 | orchestrator | 2026-03-28 01:15:17 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:15:17.466722 | orchestrator | 2026-03-28 01:15:17 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:20.501035 | orchestrator | 2026-03-28 01:15:20 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:15:20.501156 | orchestrator | 2026-03-28 01:15:20 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:15:20.501223 | orchestrator | 2026-03-28 01:15:20 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:23.543852 | orchestrator | 2026-03-28 01:15:23 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:15:23.545927 | orchestrator | 2026-03-28 01:15:23 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:15:23.546242 | orchestrator | 2026-03-28 01:15:23 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:26.584105 | orchestrator | 2026-03-28 01:15:26 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:15:26.584449 | orchestrator | 2026-03-28 01:15:26 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:15:26.585246 | orchestrator | 2026-03-28 01:15:26 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:29.633538 | orchestrator | 2026-03-28 01:15:29 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in 
state STARTED 2026-03-28 01:15:29.636124 | orchestrator | 2026-03-28 01:15:29 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:15:29.636459 | orchestrator | 2026-03-28 01:15:29 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:32.679937 | orchestrator | 2026-03-28 01:15:32 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:15:32.683441 | orchestrator | 2026-03-28 01:15:32 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:15:32.683512 | orchestrator | 2026-03-28 01:15:32 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:35.725568 | orchestrator | 2026-03-28 01:15:35 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:15:35.727464 | orchestrator | 2026-03-28 01:15:35 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:15:35.727603 | orchestrator | 2026-03-28 01:15:35 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:38.763490 | orchestrator | 2026-03-28 01:15:38 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:15:38.763579 | orchestrator | 2026-03-28 01:15:38 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:15:38.763589 | orchestrator | 2026-03-28 01:15:38 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:41.807232 | orchestrator | 2026-03-28 01:15:41 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:15:41.809380 | orchestrator | 2026-03-28 01:15:41 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:15:41.809448 | orchestrator | 2026-03-28 01:15:41 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:44.859552 | orchestrator | 2026-03-28 01:15:44 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:15:44.861647 | orchestrator | 2026-03-28 01:15:44 | 
INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:15:44.861775 | orchestrator | 2026-03-28 01:15:44 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:47.906895 | orchestrator | 2026-03-28 01:15:47 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:15:47.909751 | orchestrator | 2026-03-28 01:15:47 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:15:47.909804 | orchestrator | 2026-03-28 01:15:47 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:50.953463 | orchestrator | 2026-03-28 01:15:50 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:15:50.954681 | orchestrator | 2026-03-28 01:15:50 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:15:50.954704 | orchestrator | 2026-03-28 01:15:50 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:54.002331 | orchestrator | 2026-03-28 01:15:54 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:15:54.009447 | orchestrator | 2026-03-28 01:15:54 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:15:54.009527 | orchestrator | 2026-03-28 01:15:54 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:57.055450 | orchestrator | 2026-03-28 01:15:57 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:15:57.056836 | orchestrator | 2026-03-28 01:15:57 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:15:57.056943 | orchestrator | 2026-03-28 01:15:57 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:00.101082 | orchestrator | 2026-03-28 01:16:00 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:16:00.102626 | orchestrator | 2026-03-28 01:16:00 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 
2026-03-28 01:16:00.102721 | orchestrator | 2026-03-28 01:16:00 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:03.148994 | orchestrator | 2026-03-28 01:16:03 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:16:03.149421 | orchestrator | 2026-03-28 01:16:03 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:16:03.149721 | orchestrator | 2026-03-28 01:16:03 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:06.190886 | orchestrator | 2026-03-28 01:16:06 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:16:06.192947 | orchestrator | 2026-03-28 01:16:06 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:16:06.193037 | orchestrator | 2026-03-28 01:16:06 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:09.233854 | orchestrator | 2026-03-28 01:16:09 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:16:09.233947 | orchestrator | 2026-03-28 01:16:09 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:16:09.233957 | orchestrator | 2026-03-28 01:16:09 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:12.272488 | orchestrator | 2026-03-28 01:16:12 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:16:12.275159 | orchestrator | 2026-03-28 01:16:12 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:16:12.275887 | orchestrator | 2026-03-28 01:16:12 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:15.319728 | orchestrator | 2026-03-28 01:16:15 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:16:15.321587 | orchestrator | 2026-03-28 01:16:15 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:16:15.321653 | orchestrator | 2026-03-28 01:16:15 | INFO  | Wait 
1 second(s) until the next check 2026-03-28 01:16:18.364433 | orchestrator | 2026-03-28 01:16:18 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:16:18.365908 | orchestrator | 2026-03-28 01:16:18 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:16:18.365961 | orchestrator | 2026-03-28 01:16:18 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:21.409556 | orchestrator | 2026-03-28 01:16:21 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:16:21.411009 | orchestrator | 2026-03-28 01:16:21 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:16:21.411136 | orchestrator | 2026-03-28 01:16:21 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:24.457025 | orchestrator | 2026-03-28 01:16:24 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:16:24.458266 | orchestrator | 2026-03-28 01:16:24 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:16:24.458447 | orchestrator | 2026-03-28 01:16:24 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:27.508839 | orchestrator | 2026-03-28 01:16:27 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:16:27.510524 | orchestrator | 2026-03-28 01:16:27 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:16:27.510731 | orchestrator | 2026-03-28 01:16:27 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:30.545953 | orchestrator | 2026-03-28 01:16:30 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:16:30.548175 | orchestrator | 2026-03-28 01:16:30 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:16:30.548215 | orchestrator | 2026-03-28 01:16:30 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:33.597790 | orchestrator | 
2026-03-28 01:16:33 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:16:33.599657 | orchestrator | 2026-03-28 01:16:33 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:16:33.599737 | orchestrator | 2026-03-28 01:16:33 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:36.648729 | orchestrator | 2026-03-28 01:16:36 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:16:36.651736 | orchestrator | 2026-03-28 01:16:36 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:16:36.651822 | orchestrator | 2026-03-28 01:16:36 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:39.697616 | orchestrator | 2026-03-28 01:16:39 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:16:39.699885 | orchestrator | 2026-03-28 01:16:39 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:16:39.699934 | orchestrator | 2026-03-28 01:16:39 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:42.770465 | orchestrator | 2026-03-28 01:16:42 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:16:42.771043 | orchestrator | 2026-03-28 01:16:42 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:16:42.771112 | orchestrator | 2026-03-28 01:16:42 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:45.816524 | orchestrator | 2026-03-28 01:16:45 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:16:45.819108 | orchestrator | 2026-03-28 01:16:45 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:16:45.819177 | orchestrator | 2026-03-28 01:16:45 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:48.866432 | orchestrator | 2026-03-28 01:16:48 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in 
state STARTED 2026-03-28 01:16:48.866739 | orchestrator | 2026-03-28 01:16:48 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:16:48.866772 | orchestrator | 2026-03-28 01:16:48 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:51.904865 | orchestrator | 2026-03-28 01:16:51 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:16:51.907164 | orchestrator | 2026-03-28 01:16:51 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:16:51.907217 | orchestrator | 2026-03-28 01:16:51 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:54.945859 | orchestrator | 2026-03-28 01:16:54 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:16:54.946782 | orchestrator | 2026-03-28 01:16:54 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:16:54.946840 | orchestrator | 2026-03-28 01:16:54 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:57.994530 | orchestrator | 2026-03-28 01:16:57 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:16:57.996025 | orchestrator | 2026-03-28 01:16:57 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:16:57.996165 | orchestrator | 2026-03-28 01:16:57 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:17:01.054890 | orchestrator | 2026-03-28 01:17:01 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:17:01.056673 | orchestrator | 2026-03-28 01:17:01 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:17:01.057155 | orchestrator | 2026-03-28 01:17:01 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:17:04.108974 | orchestrator | 2026-03-28 01:17:04 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:17:04.111262 | orchestrator | 2026-03-28 01:17:04 | 
INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:17:04.111342 | orchestrator | 2026-03-28 01:17:04 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:17:07.161018 | orchestrator | 2026-03-28 01:17:07 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:17:07.161906 | orchestrator | 2026-03-28 01:17:07 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:17:07.161951 | orchestrator | 2026-03-28 01:17:07 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:17:10.202384 | orchestrator | 2026-03-28 01:17:10 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:17:10.203218 | orchestrator | 2026-03-28 01:17:10 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:17:10.203274 | orchestrator | 2026-03-28 01:17:10 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:17:13.235938 | orchestrator | 2026-03-28 01:17:13 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:17:13.236250 | orchestrator | 2026-03-28 01:17:13 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:17:13.236274 | orchestrator | 2026-03-28 01:17:13 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:17:16.272564 | orchestrator | 2026-03-28 01:17:16 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:17:16.272655 | orchestrator | 2026-03-28 01:17:16 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:17:16.272668 | orchestrator | 2026-03-28 01:17:16 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:17:19.340440 | orchestrator | 2026-03-28 01:17:19 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:17:19.340792 | orchestrator | 2026-03-28 01:17:19 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 
2026-03-28 01:17:19.340819 | orchestrator | 2026-03-28 01:17:19 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:17:22.395188 | orchestrator | 2026-03-28 01:17:22 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:17:22.395866 | orchestrator | 2026-03-28 01:17:22 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:17:22.395964 | orchestrator | 2026-03-28 01:17:22 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:17:25.448289 | orchestrator | 2026-03-28 01:17:25 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:17:25.448932 | orchestrator | 2026-03-28 01:17:25 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:17:25.449115 | orchestrator | 2026-03-28 01:17:25 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:17:28.505417 | orchestrator | 2026-03-28 01:17:28 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:17:28.505791 | orchestrator | 2026-03-28 01:17:28 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:17:28.506202 | orchestrator | 2026-03-28 01:17:28 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:17:31.549934 | orchestrator | 2026-03-28 01:17:31 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:17:31.552857 | orchestrator | 2026-03-28 01:17:31 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:17:31.552944 | orchestrator | 2026-03-28 01:17:31 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:17:34.601732 | orchestrator | 2026-03-28 01:17:34 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:17:34.601863 | orchestrator | 2026-03-28 01:17:34 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:17:34.601880 | orchestrator | 2026-03-28 01:17:34 | INFO  | Wait 
1 second(s) until the next check 2026-03-28 01:17:37.644238 | orchestrator | 2026-03-28 01:17:37 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:17:37.646390 | orchestrator | 2026-03-28 01:17:37 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:17:37.646459 | orchestrator | 2026-03-28 01:17:37 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:17:40.679256 | orchestrator | 2026-03-28 01:17:40 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:17:40.679623 | orchestrator | 2026-03-28 01:17:40 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:17:40.679811 | orchestrator | 2026-03-28 01:17:40 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:17:43.728765 | orchestrator | 2026-03-28 01:17:43 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:17:43.730513 | orchestrator | 2026-03-28 01:17:43 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:17:43.730576 | orchestrator | 2026-03-28 01:17:43 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:17:46.843257 | orchestrator | 2026-03-28 01:17:46 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:17:46.846892 | orchestrator | 2026-03-28 01:17:46 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:17:46.847620 | orchestrator | 2026-03-28 01:17:46 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:17:49.893450 | orchestrator | 2026-03-28 01:17:49 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:17:49.893844 | orchestrator | 2026-03-28 01:17:49 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:17:49.893886 | orchestrator | 2026-03-28 01:17:49 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:17:52.935417 | orchestrator | 
2026-03-28 01:17:52 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:17:52.937840 | orchestrator | 2026-03-28 01:17:52 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:17:52.937952 | orchestrator | 2026-03-28 01:17:52 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:17:55.981435 | orchestrator | 2026-03-28 01:17:55 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:17:55.982501 | orchestrator | 2026-03-28 01:17:55 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:17:55.982934 | orchestrator | 2026-03-28 01:17:55 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:17:59.025948 | orchestrator | 2026-03-28 01:17:59 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:17:59.026097 | orchestrator | 2026-03-28 01:17:59 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:17:59.026110 | orchestrator | 2026-03-28 01:17:59 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:18:02.066305 | orchestrator | 2026-03-28 01:18:02 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:18:02.067391 | orchestrator | 2026-03-28 01:18:02 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:18:02.067425 | orchestrator | 2026-03-28 01:18:02 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:18:05.101060 | orchestrator | 2026-03-28 01:18:05 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED 2026-03-28 01:18:05.101415 | orchestrator | 2026-03-28 01:18:05 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:18:05.101506 | orchestrator | 2026-03-28 01:18:05 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:18:08.135396 | orchestrator | 2026-03-28 01:18:08 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in 
state STARTED
2026-03-28 01:18:08.135851 | orchestrator | 2026-03-28 01:18:08 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED
2026-03-28 01:18:08.136006 | orchestrator | 2026-03-28 01:18:08 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:18:11.177280 | orchestrator | 2026-03-28 01:18:11 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED
2026-03-28 01:18:11.179742 | orchestrator | 2026-03-28 01:18:11 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED
2026-03-28 01:18:11.179804 | orchestrator | 2026-03-28 01:18:11 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:18:14.223580 | orchestrator | 2026-03-28 01:18:14 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED
2026-03-28 01:18:14.223703 | orchestrator | 2026-03-28 01:18:14 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED
2026-03-28 01:18:14.223721 | orchestrator | 2026-03-28 01:18:14 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:18:17.265808 | orchestrator | 2026-03-28 01:18:17 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED
2026-03-28 01:18:17.267639 | orchestrator | 2026-03-28 01:18:17 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED
2026-03-28 01:18:17.267685 | orchestrator | 2026-03-28 01:18:17 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:18:20.318096 | orchestrator | 2026-03-28 01:18:20 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED
2026-03-28 01:18:20.318968 | orchestrator | 2026-03-28 01:18:20 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED
2026-03-28 01:18:20.319007 | orchestrator | 2026-03-28 01:18:20 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:18:23.364722 | orchestrator | 2026-03-28 01:18:23 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED
2026-03-28 01:18:23.366223 | orchestrator | 2026-03-28 01:18:23 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED
2026-03-28 01:18:23.366265 | orchestrator | 2026-03-28 01:18:23 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:18:26.418570 | orchestrator | 2026-03-28 01:18:26 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED
2026-03-28 01:18:26.418871 | orchestrator | 2026-03-28 01:18:26 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED
2026-03-28 01:18:26.418903 | orchestrator | 2026-03-28 01:18:26 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:18:29.471576 | orchestrator | 2026-03-28 01:18:29 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED
2026-03-28 01:18:29.471734 | orchestrator | 2026-03-28 01:18:29 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED
2026-03-28 01:18:29.471759 | orchestrator | 2026-03-28 01:18:29 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:18:32.512587 | orchestrator | 2026-03-28 01:18:32 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED
2026-03-28 01:18:32.512826 | orchestrator | 2026-03-28 01:18:32 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED
2026-03-28 01:18:32.513621 | orchestrator | 2026-03-28 01:18:32 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:18:35.563414 | orchestrator | 2026-03-28 01:18:35 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED
2026-03-28 01:18:35.564775 | orchestrator | 2026-03-28 01:18:35 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED
2026-03-28 01:18:35.564859 | orchestrator | 2026-03-28 01:18:35 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:18:38.613432 | orchestrator | 2026-03-28 01:18:38 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED
2026-03-28 01:18:38.614622 | orchestrator | 2026-03-28 01:18:38 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED
2026-03-28 01:18:38.614680 | orchestrator | 2026-03-28 01:18:38 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:18:41.657279 | orchestrator | 2026-03-28 01:18:41 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED
2026-03-28 01:18:41.657671 | orchestrator | 2026-03-28 01:18:41 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED
2026-03-28 01:18:41.657705 | orchestrator | 2026-03-28 01:18:41 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:18:44.705337 | orchestrator | 2026-03-28 01:18:44 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED
2026-03-28 01:18:44.706076 | orchestrator | 2026-03-28 01:18:44 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED
2026-03-28 01:18:44.706115 | orchestrator | 2026-03-28 01:18:44 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:18:47.736791 | orchestrator | 2026-03-28 01:18:47 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED
2026-03-28 01:18:47.738955 | orchestrator | 2026-03-28 01:18:47 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED
2026-03-28 01:18:47.739001 | orchestrator | 2026-03-28 01:18:47 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:18:50.782541 | orchestrator | 2026-03-28 01:18:50 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state STARTED
2026-03-28 01:18:50.782642 | orchestrator | 2026-03-28 01:18:50 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED
2026-03-28 01:18:50.782661 | orchestrator | 2026-03-28 01:18:50 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:18:53.831020 | orchestrator | 2026-03-28 01:18:53 | INFO  | Task caac72bc-d1ab-48d9-a88d-39b485338843 is in state SUCCESS
2026-03-28 01:18:53.833547 | orchestrator |
2026-03-28 01:18:53.833614 | orchestrator |
2026-03-28 01:18:53.833631 | orchestrator | PLAY [Group hosts based on configuration] **************************************
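The wait loop above (poll each task's state every few seconds, report it, and stop once it leaves STARTED) can be sketched as a minimal Python loop. This is a sketch only: `get_task_state` and the hard-coded state sequence are hypothetical stand-ins for whatever task API the OSISM client actually calls, and the task ID is taken from the log purely as an example.

```python
import time

# Hypothetical stand-in for the real task-state API: a stub that
# reports STARTED twice and then SUCCESS, like the tail of the log above.
_STATES = iter(["STARTED", "STARTED", "SUCCESS"])

def get_task_state(task_id):
    """Return the current state of a task (stub implementation)."""
    return next(_STATES)

def wait_for_task(task_id, interval=1.0, poll=get_task_state):
    """Poll until the task leaves STARTED, mirroring the log's wait loop."""
    while True:
        state = poll(task_id)
        print(f"Task {task_id} is in state {state}")
        if state != "STARTED":
            return state
        # "Wait 1 second(s) until the next check"
        time.sleep(interval)

result = wait_for_task("caac72bc-d1ab-48d9-a88d-39b485338843", interval=0)
```

With the stub above, the loop prints two STARTED lines and returns `"SUCCESS"` on the third poll.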
2026-03-28 01:18:53.833646 | orchestrator | 2026-03-28 01:18:53.833662 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-03-28 01:18:53.833677 | orchestrator | Saturday 28 March 2026 01:09:36 +0000 (0:00:00.548) 0:00:00.548 ******** 2026-03-28 01:18:53.833752 | orchestrator | changed: [testbed-manager] 2026-03-28 01:18:53.833765 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:53.833775 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:18:53.833783 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:18:53.833792 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:18:53.833849 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:18:53.833858 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:18:53.833867 | orchestrator | 2026-03-28 01:18:53.833876 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:18:53.834106 | orchestrator | Saturday 28 March 2026 01:09:38 +0000 (0:00:02.014) 0:00:02.563 ******** 2026-03-28 01:18:53.834124 | orchestrator | changed: [testbed-manager] 2026-03-28 01:18:53.834135 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:53.834169 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:18:53.834179 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:18:53.834189 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:18:53.834199 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:18:53.834209 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:18:53.834218 | orchestrator | 2026-03-28 01:18:53.834228 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:18:53.834239 | orchestrator | Saturday 28 March 2026 01:09:39 +0000 (0:00:00.818) 0:00:03.382 ******** 2026-03-28 01:18:53.834261 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-03-28 01:18:53.834311 | orchestrator | changed: 
[testbed-node-0] => (item=enable_nova_True) 2026-03-28 01:18:53.834321 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-03-28 01:18:53.834331 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-03-28 01:18:53.834341 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-03-28 01:18:53.834352 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-03-28 01:18:53.834362 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-03-28 01:18:53.834381 | orchestrator | 2026-03-28 01:18:53.834391 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-03-28 01:18:53.834401 | orchestrator | 2026-03-28 01:18:53.834434 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-28 01:18:53.834444 | orchestrator | Saturday 28 March 2026 01:09:41 +0000 (0:00:01.657) 0:00:05.039 ******** 2026-03-28 01:18:53.834454 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:18:53.834464 | orchestrator | 2026-03-28 01:18:53.834474 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-03-28 01:18:53.834497 | orchestrator | Saturday 28 March 2026 01:09:42 +0000 (0:00:01.042) 0:00:06.081 ******** 2026-03-28 01:18:53.834507 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-03-28 01:18:53.834516 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-03-28 01:18:53.834525 | orchestrator | 2026-03-28 01:18:53.834533 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-03-28 01:18:53.834550 | orchestrator | Saturday 28 March 2026 01:09:47 +0000 (0:00:05.473) 0:00:11.555 ******** 2026-03-28 01:18:53.834559 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 01:18:53.834568 | orchestrator | changed: 
[testbed-node-0] => (item=None) 2026-03-28 01:18:53.834576 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:53.834585 | orchestrator | 2026-03-28 01:18:53.834648 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-28 01:18:53.834658 | orchestrator | Saturday 28 March 2026 01:09:52 +0000 (0:00:05.072) 0:00:16.627 ******** 2026-03-28 01:18:53.834667 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:53.834675 | orchestrator | 2026-03-28 01:18:53.834716 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-03-28 01:18:53.834730 | orchestrator | Saturday 28 March 2026 01:09:54 +0000 (0:00:01.248) 0:00:17.876 ******** 2026-03-28 01:18:53.834745 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:53.834760 | orchestrator | 2026-03-28 01:18:53.834775 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-03-28 01:18:53.834821 | orchestrator | Saturday 28 March 2026 01:09:56 +0000 (0:00:02.250) 0:00:20.127 ******** 2026-03-28 01:18:53.834832 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:53.834840 | orchestrator | 2026-03-28 01:18:53.834850 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-28 01:18:53.834859 | orchestrator | Saturday 28 March 2026 01:10:00 +0000 (0:00:04.283) 0:00:24.411 ******** 2026-03-28 01:18:53.834867 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:53.834876 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.834911 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.834920 | orchestrator | 2026-03-28 01:18:53.834929 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-28 01:18:53.834970 | orchestrator | Saturday 28 March 2026 01:10:01 +0000 (0:00:00.890) 0:00:25.302 ******** 2026-03-28 01:18:53.834986 | orchestrator | 
ok: [testbed-node-0] 2026-03-28 01:18:53.834999 | orchestrator | 2026-03-28 01:18:53.835008 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-03-28 01:18:53.835017 | orchestrator | Saturday 28 March 2026 01:10:38 +0000 (0:00:36.969) 0:01:02.271 ******** 2026-03-28 01:18:53.835025 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:53.835034 | orchestrator | 2026-03-28 01:18:53.835045 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-28 01:18:53.835056 | orchestrator | Saturday 28 March 2026 01:10:54 +0000 (0:00:16.454) 0:01:18.726 ******** 2026-03-28 01:18:53.835066 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:18:53.835077 | orchestrator | 2026-03-28 01:18:53.835088 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-28 01:18:53.835099 | orchestrator | Saturday 28 March 2026 01:11:08 +0000 (0:00:13.405) 0:01:32.132 ******** 2026-03-28 01:18:53.835127 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:18:53.835139 | orchestrator | 2026-03-28 01:18:53.835150 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-03-28 01:18:53.835161 | orchestrator | Saturday 28 March 2026 01:11:09 +0000 (0:00:01.295) 0:01:33.427 ******** 2026-03-28 01:18:53.835172 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:53.835183 | orchestrator | 2026-03-28 01:18:53.835194 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-28 01:18:53.835205 | orchestrator | Saturday 28 March 2026 01:11:10 +0000 (0:00:00.493) 0:01:33.921 ******** 2026-03-28 01:18:53.835216 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:18:53.835227 | orchestrator | 2026-03-28 01:18:53.835238 | orchestrator | TASK [nova : Running Nova API 
bootstrap container] ***************************** 2026-03-28 01:18:53.835249 | orchestrator | Saturday 28 March 2026 01:11:10 +0000 (0:00:00.689) 0:01:34.610 ******** 2026-03-28 01:18:53.835282 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:18:53.835318 | orchestrator | 2026-03-28 01:18:53.835330 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-28 01:18:53.835340 | orchestrator | Saturday 28 March 2026 01:11:31 +0000 (0:00:20.282) 0:01:54.893 ******** 2026-03-28 01:18:53.835351 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:53.835362 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.835373 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.835384 | orchestrator | 2026-03-28 01:18:53.835394 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-03-28 01:18:53.835405 | orchestrator | 2026-03-28 01:18:53.835424 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-28 01:18:53.835436 | orchestrator | Saturday 28 March 2026 01:11:31 +0000 (0:00:00.450) 0:01:55.344 ******** 2026-03-28 01:18:53.835447 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:18:53.835457 | orchestrator | 2026-03-28 01:18:53.835468 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-03-28 01:18:53.835479 | orchestrator | Saturday 28 March 2026 01:11:32 +0000 (0:00:00.696) 0:01:56.040 ******** 2026-03-28 01:18:53.835490 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.835500 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.835511 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:53.835522 | orchestrator | 2026-03-28 01:18:53.835533 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-03-28 01:18:53.835544 | 
orchestrator | Saturday 28 March 2026 01:11:34 +0000 (0:00:02.645) 0:01:58.685 ******** 2026-03-28 01:18:53.835555 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.835565 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.835576 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:53.835587 | orchestrator | 2026-03-28 01:18:53.835607 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-28 01:18:53.835618 | orchestrator | Saturday 28 March 2026 01:11:37 +0000 (0:00:02.584) 0:02:01.269 ******** 2026-03-28 01:18:53.835629 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:53.835639 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.835650 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.835661 | orchestrator | 2026-03-28 01:18:53.835816 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-28 01:18:53.835830 | orchestrator | Saturday 28 March 2026 01:11:38 +0000 (0:00:00.781) 0:02:02.051 ******** 2026-03-28 01:18:53.835841 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-28 01:18:53.835852 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-28 01:18:53.835863 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.835873 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.835940 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-28 01:18:53.835954 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-03-28 01:18:53.835965 | orchestrator | 2026-03-28 01:18:53.835976 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-28 01:18:53.835987 | orchestrator | Saturday 28 March 2026 01:11:47 +0000 (0:00:08.822) 0:02:10.873 ******** 2026-03-28 01:18:53.835998 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:53.836009 | orchestrator | skipping: 
[testbed-node-1] 2026-03-28 01:18:53.836019 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.836030 | orchestrator | 2026-03-28 01:18:53.836041 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-28 01:18:53.836052 | orchestrator | Saturday 28 March 2026 01:11:47 +0000 (0:00:00.473) 0:02:11.347 ******** 2026-03-28 01:18:53.836063 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-28 01:18:53.836074 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:53.836084 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-28 01:18:53.836095 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.836106 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-28 01:18:53.836139 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.836150 | orchestrator | 2026-03-28 01:18:53.836161 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-28 01:18:53.836171 | orchestrator | Saturday 28 March 2026 01:11:48 +0000 (0:00:00.704) 0:02:12.051 ******** 2026-03-28 01:18:53.836182 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.836193 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.836203 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:53.836214 | orchestrator | 2026-03-28 01:18:53.836225 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-03-28 01:18:53.836236 | orchestrator | Saturday 28 March 2026 01:11:48 +0000 (0:00:00.717) 0:02:12.768 ******** 2026-03-28 01:18:53.836247 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.836257 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.836268 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:53.836278 | orchestrator | 2026-03-28 01:18:53.836289 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] 
************** 2026-03-28 01:18:53.836300 | orchestrator | Saturday 28 March 2026 01:11:50 +0000 (0:00:01.052) 0:02:13.821 ******** 2026-03-28 01:18:53.836311 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.836322 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.836341 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:53.836353 | orchestrator | 2026-03-28 01:18:53.836364 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-03-28 01:18:53.836375 | orchestrator | Saturday 28 March 2026 01:11:52 +0000 (0:00:02.319) 0:02:16.141 ******** 2026-03-28 01:18:53.836386 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.836396 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.836407 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:18:53.836427 | orchestrator | 2026-03-28 01:18:53.836439 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-28 01:18:53.836449 | orchestrator | Saturday 28 March 2026 01:12:15 +0000 (0:00:23.436) 0:02:39.577 ******** 2026-03-28 01:18:53.836460 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.836470 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.836481 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:18:53.836492 | orchestrator | 2026-03-28 01:18:53.836503 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-28 01:18:53.836514 | orchestrator | Saturday 28 March 2026 01:12:30 +0000 (0:00:14.907) 0:02:54.485 ******** 2026-03-28 01:18:53.836524 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:18:53.836535 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.836545 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.836556 | orchestrator | 2026-03-28 01:18:53.836567 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 
2026-03-28 01:18:53.836578 | orchestrator | Saturday 28 March 2026 01:12:31 +0000 (0:00:01.245) 0:02:55.730 ******** 2026-03-28 01:18:53.836589 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.836599 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.836617 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:53.836628 | orchestrator | 2026-03-28 01:18:53.836638 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-03-28 01:18:53.836649 | orchestrator | Saturday 28 March 2026 01:12:46 +0000 (0:00:14.653) 0:03:10.383 ******** 2026-03-28 01:18:53.836660 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.836671 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:53.836682 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.836692 | orchestrator | 2026-03-28 01:18:53.836703 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-28 01:18:53.836714 | orchestrator | Saturday 28 March 2026 01:12:47 +0000 (0:00:01.199) 0:03:11.582 ******** 2026-03-28 01:18:53.836725 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:53.836736 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.836746 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.836757 | orchestrator | 2026-03-28 01:18:53.836768 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-03-28 01:18:53.836778 | orchestrator | 2026-03-28 01:18:53.836789 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-28 01:18:53.836800 | orchestrator | Saturday 28 March 2026 01:12:48 +0000 (0:00:00.602) 0:03:12.185 ******** 2026-03-28 01:18:53.836811 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:18:53.836822 | orchestrator | 2026-03-28 01:18:53.836833 | 
orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-03-28 01:18:53.836844 | orchestrator | Saturday 28 March 2026 01:12:49 +0000 (0:00:00.635) 0:03:12.820 ******** 2026-03-28 01:18:53.836854 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-03-28 01:18:53.836865 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-03-28 01:18:53.836876 | orchestrator | 2026-03-28 01:18:53.836954 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-03-28 01:18:53.836974 | orchestrator | Saturday 28 March 2026 01:12:52 +0000 (0:00:03.481) 0:03:16.302 ******** 2026-03-28 01:18:53.837008 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-03-28 01:18:53.837023 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-03-28 01:18:53.837034 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-03-28 01:18:53.837045 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-03-28 01:18:53.837056 | orchestrator | 2026-03-28 01:18:53.837076 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-03-28 01:18:53.837087 | orchestrator | Saturday 28 March 2026 01:12:59 +0000 (0:00:06.904) 0:03:23.206 ******** 2026-03-28 01:18:53.837098 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 01:18:53.837108 | orchestrator | 2026-03-28 01:18:53.837119 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-03-28 01:18:53.837130 | orchestrator | Saturday 28 March 2026 01:13:02 +0000 (0:00:03.469) 0:03:26.676 ******** 2026-03-28 01:18:53.837140 | 
orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 01:18:53.837151 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-03-28 01:18:53.837162 | orchestrator | 2026-03-28 01:18:53.837173 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-03-28 01:18:53.837184 | orchestrator | Saturday 28 March 2026 01:13:07 +0000 (0:00:04.315) 0:03:30.991 ******** 2026-03-28 01:18:53.837195 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 01:18:53.837205 | orchestrator | 2026-03-28 01:18:53.837216 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-03-28 01:18:53.837227 | orchestrator | Saturday 28 March 2026 01:13:10 +0000 (0:00:03.454) 0:03:34.446 ******** 2026-03-28 01:18:53.837238 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-03-28 01:18:53.837249 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-03-28 01:18:53.837260 | orchestrator | 2026-03-28 01:18:53.837270 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-28 01:18:53.837301 | orchestrator | Saturday 28 March 2026 01:13:18 +0000 (0:00:07.980) 0:03:42.426 ******** 2026-03-28 01:18:53.837327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 01:18:53.837346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 01:18:53.837366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 01:18:53.837389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:53.837403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:53.837421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:53.837433 | orchestrator | 2026-03-28 01:18:53.837444 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-28 01:18:53.837456 | orchestrator | Saturday 28 March 2026 01:13:20 +0000 (0:00:01.458) 0:03:43.885 ******** 2026-03-28 01:18:53.837466 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:53.837476 | orchestrator | 2026-03-28 01:18:53.837486 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-03-28 01:18:53.837496 | orchestrator | Saturday 28 March 2026 01:13:20 +0000 (0:00:00.192) 0:03:44.077 ******** 2026-03-28 01:18:53.837506 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:53.837516 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.837532 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.837542 | orchestrator | 2026-03-28 01:18:53.837552 | 
orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-03-28 01:18:53.837562 | orchestrator | Saturday 28 March 2026 01:13:20 +0000 (0:00:00.322) 0:03:44.400 ********
2026-03-28 01:18:53.837572 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 01:18:53.837582 | orchestrator |
2026-03-28 01:18:53.837592 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-03-28 01:18:53.837602 | orchestrator | Saturday 28 March 2026 01:13:21 +0000 (0:00:01.044) 0:03:45.445 ********
2026-03-28 01:18:53.837612 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:18:53.837621 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:18:53.837631 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:18:53.837641 | orchestrator |
2026-03-28 01:18:53.837651 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-28 01:18:53.837661 | orchestrator | Saturday 28 March 2026 01:13:22 +0000 (0:00:00.355) 0:03:45.800 ********
2026-03-28 01:18:53.837671 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:18:53.837681 | orchestrator |
2026-03-28 01:18:53.837691 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-03-28 01:18:53.837700 | orchestrator | Saturday 28 March 2026 01:13:22 +0000 (0:00:00.628) 0:03:46.429 ********
2026-03-28 01:18:53.837718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 01:18:53.837735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 01:18:53.837747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 01:18:53.837765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.837776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.837794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.837805 | orchestrator |
2026-03-28 01:18:53.837815 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-03-28 01:18:53.837825 | orchestrator | Saturday 28 March 2026 01:13:25 +0000 (0:00:02.896) 0:03:49.326 ********
2026-03-28 01:18:53.837846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 01:18:53.837863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.837874 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:18:53.837909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 01:18:53.837929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.837939 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:18:53.837954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 01:18:53.837971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.837982 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:18:53.837991 | orchestrator |
2026-03-28 01:18:53.838001 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-03-28 01:18:53.838011 | orchestrator | Saturday 28 March 2026 01:13:26 +0000 (0:00:00.618) 0:03:49.944 ********
2026-03-28 01:18:53.838063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 01:18:53.838075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.838085 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:18:53.838108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 01:18:53.838127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.838137 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:18:53.838148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 01:18:53.838159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.838168 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:18:53.838178 | orchestrator |
2026-03-28 01:18:53.838188 | orchestrator | TASK [nova : Copying over config.json files for services] **********************
2026-03-28 01:18:53.838198 | orchestrator | Saturday 28 March 2026 01:13:27 +0000 (0:00:00.836) 0:03:50.781 ********
2026-03-28 01:18:53.838215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 01:18:53.838239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 01:18:53.838251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 01:18:53.838296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.838307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.838328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.838339 | orchestrator |
2026-03-28 01:18:53.838349 | orchestrator | TASK [nova : Copying over nova.conf] *******************************************
2026-03-28 01:18:53.838359 | orchestrator | Saturday 28 March 2026 01:13:29 +0000 (0:00:02.692) 0:03:53.474 ********
2026-03-28 01:18:53.838369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 01:18:53.838380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 01:18:53.838398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 01:18:53.838419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.838430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.838441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.838451 | orchestrator |
2026-03-28 01:18:53.838461 | orchestrator | TASK [nova : Copying over existing policy file] ********************************
2026-03-28 01:18:53.838471 | orchestrator | Saturday 28 March 2026 01:13:35 +0000 (0:00:05.807) 0:03:59.281 ********
2026-03-28 01:18:53.838494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 01:18:53.838511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.838521 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:18:53.838536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 01:18:53.838547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.838557 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:18:53.838568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 01:18:53.838592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.838603 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:18:53.838613 | orchestrator |
2026-03-28 01:18:53.838623 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2026-03-28 01:18:53.838633 | orchestrator | Saturday 28 March 2026 01:13:36 +0000 (0:00:00.624) 0:03:59.906 ********
2026-03-28 01:18:53.838643 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:18:53.838652 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:18:53.838662 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:18:53.838672 | orchestrator |
2026-03-28 01:18:53.838682 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2026-03-28 01:18:53.838691 | orchestrator | Saturday 28 March 2026 01:13:37 +0000 (0:00:01.687) 0:04:01.593 ********
2026-03-28 01:18:53.838701 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:18:53.838711 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:18:53.838720 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:18:53.838730 | orchestrator |
2026-03-28 01:18:53.838740 | orchestrator | TASK [nova : Check nova containers] ********************************************
2026-03-28 01:18:53.838757 | orchestrator | Saturday 28 March 2026 01:13:38 +0000 (0:00:00.392) 0:04:01.986 ********
2026-03-28 01:18:53.838768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 01:18:53.838779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port':
'8775', 'tls_backend': 'no'}}}}) 2026-03-28 01:18:53.838805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 01:18:53.838821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:53.838832 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:53.838842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:53.838852 | orchestrator | 2026-03-28 01:18:53.838862 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-28 01:18:53.838871 | orchestrator | Saturday 28 March 2026 01:13:40 +0000 (0:00:02.234) 0:04:04.220 ******** 2026-03-28 01:18:53.838881 | orchestrator | 2026-03-28 01:18:53.838950 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-28 01:18:53.838961 | orchestrator | Saturday 28 March 2026 01:13:40 +0000 (0:00:00.147) 0:04:04.368 ******** 2026-03-28 01:18:53.838971 | orchestrator | 2026-03-28 01:18:53.838980 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-28 01:18:53.838997 | orchestrator | Saturday 28 
March 2026 01:13:40 +0000 (0:00:00.143) 0:04:04.511 ********
2026-03-28 01:18:53.839006 | orchestrator |
2026-03-28 01:18:53.839017 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-03-28 01:18:53.839053 | orchestrator | Saturday 28 March 2026 01:13:40 +0000 (0:00:00.140) 0:04:04.652 ********
2026-03-28 01:18:53.839075 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:18:53.839096 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:18:53.839112 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:18:53.839128 | orchestrator |
2026-03-28 01:18:53.839143 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-03-28 01:18:53.839159 | orchestrator | Saturday 28 March 2026 01:14:03 +0000 (0:00:23.001) 0:04:27.654 ********
2026-03-28 01:18:53.839175 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:18:53.839191 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:18:53.839205 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:18:53.839217 | orchestrator |
2026-03-28 01:18:53.839230 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-03-28 01:18:53.839243 | orchestrator |
2026-03-28 01:18:53.839255 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-28 01:18:53.839268 | orchestrator | Saturday 28 March 2026 01:14:15 +0000 (0:00:11.515) 0:04:39.169 ********
2026-03-28 01:18:53.839276 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:18:53.839285 | orchestrator |
2026-03-28 01:18:53.839301 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-28 01:18:53.839309 | orchestrator | Saturday 28 March 2026 01:14:16 +0000 (0:00:01.341) 0:04:40.510 ********
2026-03-28 01:18:53.839317 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:18:53.839325 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:18:53.839333 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:18:53.839340 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:18:53.839348 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:18:53.839356 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:18:53.839363 | orchestrator |
2026-03-28 01:18:53.839371 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-03-28 01:18:53.839379 | orchestrator | Saturday 28 March 2026 01:14:17 +0000 (0:00:00.678) 0:04:41.189 ********
2026-03-28 01:18:53.839387 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:18:53.839394 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:18:53.839402 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:18:53.839410 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 01:18:53.839418 | orchestrator |
2026-03-28 01:18:53.839426 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-28 01:18:53.839434 | orchestrator | Saturday 28 March 2026 01:14:18 +0000 (0:00:01.499) 0:04:42.689 ********
2026-03-28 01:18:53.839442 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-03-28 01:18:53.839450 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-03-28 01:18:53.839458 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-03-28 01:18:53.839466 | orchestrator |
2026-03-28 01:18:53.839474 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-28 01:18:53.839488 | orchestrator | Saturday 28 March 2026 01:14:19 +0000 (0:00:00.870) 0:04:43.559 ********
2026-03-28 01:18:53.839496 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-03-28 01:18:53.839503 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-03-28 01:18:53.839511 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-03-28 01:18:53.839519 | orchestrator |
2026-03-28 01:18:53.839527 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-28 01:18:53.839535 | orchestrator | Saturday 28 March 2026 01:14:21 +0000 (0:00:01.410) 0:04:44.970 ********
2026-03-28 01:18:53.839550 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-03-28 01:18:53.839558 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:18:53.839566 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-03-28 01:18:53.839574 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:18:53.839581 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-03-28 01:18:53.839589 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:18:53.839597 | orchestrator |
2026-03-28 01:18:53.839605 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-03-28 01:18:53.839613 | orchestrator | Saturday 28 March 2026 01:14:21 +0000 (0:00:00.603) 0:04:45.573 ********
2026-03-28 01:18:53.839621 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-28 01:18:53.839628 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-28 01:18:53.839636 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:18:53.839644 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-28 01:18:53.839652 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-28 01:18:53.839660 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:18:53.839667 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-28 01:18:53.839675 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-28 01:18:53.839683 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-28 01:18:53.839691 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-28 01:18:53.839699 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:18:53.839706 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-28 01:18:53.839714 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-28 01:18:53.839722 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-28 01:18:53.839730 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-28 01:18:53.839738 | orchestrator |
2026-03-28 01:18:53.839745 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-03-28 01:18:53.839753 | orchestrator | Saturday 28 March 2026 01:14:24 +0000 (0:00:02.272) 0:04:47.846 ********
2026-03-28 01:18:53.839761 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:18:53.839769 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:18:53.839776 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:18:53.839784 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:18:53.839792 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:18:53.839800 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:18:53.839808 | orchestrator |
2026-03-28 01:18:53.839816 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-03-28 01:18:53.839823 | orchestrator | Saturday 28 March 2026 01:14:25 +0000 (0:00:01.221) 0:04:49.067 ********
2026-03-28 01:18:53.839831 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:18:53.839839 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:18:53.839847 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:18:53.839854 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:18:53.839862 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:18:53.839870 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:18:53.839878 | orchestrator |
2026-03-28 01:18:53.839909 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-28 01:18:53.839918 | orchestrator | Saturday 28 March 2026 01:14:27 +0000 (0:00:02.032) 0:04:51.099 ********
2026-03-28 01:18:53.839932 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-28 01:18:53.839951 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared',
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-28 01:18:53.839960 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-28 01:18:53.839968 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-28 01:18:53.839977 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-28 01:18:53.839991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-28 01:18:53.840011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-28 01:18:53.840023 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-28 01:18:53.840032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-28 01:18:53.840041 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.840054 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.840076 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.840098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.840116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.840129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.840143 | orchestrator |
2026-03-28 01:18:53.840156 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-28 01:18:53.840168 | orchestrator | Saturday 28 March 2026 01:14:29 +0000 (0:00:02.178) 0:04:53.278 ********
2026-03-28 01:18:53.840176 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0,
testbed-node-1, testbed-node-2
2026-03-28 01:18:53.840185 | orchestrator |
2026-03-28 01:18:53.840193 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-03-28 01:18:53.840201 | orchestrator | Saturday 28 March 2026 01:14:30 +0000 (0:00:01.275) 0:04:54.554 ********
2026-03-28 01:18:53.840209 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-28 01:18:53.840224 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-28 01:18:53.840243 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-28 01:18:53.840252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-28 01:18:53.840260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-28 01:18:53.840269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-28 01:18:53.840277 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-28 01:18:53.840296 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-28 01:18:53.840304 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-28 01:18:53.840320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.840328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.840337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.840345 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.840364 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'},
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:53.840373 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:53.840381 | orchestrator | 2026-03-28 01:18:53.840389 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-28 01:18:53.840397 | orchestrator | Saturday 28 March 2026 01:14:34 +0000 (0:00:03.864) 0:04:58.418 ******** 2026-03-28 01:18:53.840409 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:18:53.840418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:18:53.840426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:18:53.840440 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:18:53.840449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:18:53.840463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:18:53.840471 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:53.840483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:18:53.840492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:18:53.840500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:18:53.840508 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:18:53.840522 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:18:53.840535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:18:53.840543 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:18:53.840552 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:18:53.840563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:18:53.840572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:18:53.840580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:18:53.840593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:18:53.840601 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.840609 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.840617 | orchestrator | 2026-03-28 01:18:53.840625 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-28 01:18:53.840633 | orchestrator | Saturday 28 March 2026 01:14:36 +0000 (0:00:01.687) 0:05:00.106 ******** 2026-03-28 01:18:53.840647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:18:53.840656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:18:53.840668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:18:53.840676 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:18:53.840685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:18:53.840698 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:18:53.840711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:18:53.840719 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:18:53.840727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:18:53.840739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:18:53.840747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 
'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:18:53.840761 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:18:53.840769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:18:53.840777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:18:53.840786 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:53.840799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:18:53.840807 | orchestrator | 2026-03-28 01:18:53 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:18:53.840816 | orchestrator | 2026-03-28 01:18:53 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:18:53.840825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:18:53.840833 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.840845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:18:53.840853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:18:53.840867 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.840874 | orchestrator | 2026-03-28 01:18:53.840883 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-28 01:18:53.840916 | orchestrator | Saturday 28 March 2026 01:14:39 +0000 (0:00:02.724) 0:05:02.830 ******** 2026-03-28 01:18:53.840924 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:53.840932 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.840940 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.840948 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:18:53.840956 | orchestrator | 2026-03-28 01:18:53.840963 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-28 
01:18:53.840971 | orchestrator | Saturday 28 March 2026 01:14:40 +0000 (0:00:01.175) 0:05:04.006 ********
2026-03-28 01:18:53.840979 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-28 01:18:53.840987 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-28 01:18:53.840994 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-28 01:18:53.841002 | orchestrator |
2026-03-28 01:18:53.841010 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-03-28 01:18:53.841018 | orchestrator | Saturday 28 March 2026 01:14:41 +0000 (0:00:01.057) 0:05:05.063 ********
2026-03-28 01:18:53.841026 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-28 01:18:53.841034 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-28 01:18:53.841041 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-28 01:18:53.841049 | orchestrator |
2026-03-28 01:18:53.841057 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-03-28 01:18:53.841065 | orchestrator | Saturday 28 March 2026 01:14:42 +0000 (0:00:01.033) 0:05:06.096 ********
2026-03-28 01:18:53.841073 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:18:53.841081 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:18:53.841089 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:18:53.841097 | orchestrator |
2026-03-28 01:18:53.841105 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-03-28 01:18:53.841113 | orchestrator | Saturday 28 March 2026 01:14:42 +0000 (0:00:00.520) 0:05:06.616 ********
2026-03-28 01:18:53.841120 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:18:53.841128 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:18:53.841136 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:18:53.841144 | orchestrator |
2026-03-28 01:18:53.841152 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-03-28 01:18:53.841160 | orchestrator | Saturday 28 March 2026 01:14:43 +0000 (0:00:00.896) 0:05:07.513 ********
2026-03-28 01:18:53.841167 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-28 01:18:53.841180 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-28 01:18:53.841188 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-28 01:18:53.841196 | orchestrator |
2026-03-28 01:18:53.841204 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-03-28 01:18:53.841212 | orchestrator | Saturday 28 March 2026 01:14:45 +0000 (0:00:01.292) 0:05:08.805 ********
2026-03-28 01:18:53.841220 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-28 01:18:53.841228 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-28 01:18:53.841236 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-28 01:18:53.841244 | orchestrator |
2026-03-28 01:18:53.841252 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-03-28 01:18:53.841266 | orchestrator | Saturday 28 March 2026 01:14:46 +0000 (0:00:01.222) 0:05:10.028 ********
2026-03-28 01:18:53.841274 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-28 01:18:53.841282 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-28 01:18:53.841289 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-28 01:18:53.841297 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-03-28 01:18:53.841305 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-03-28 01:18:53.841313 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-03-28 01:18:53.841320 | orchestrator |
2026-03-28 01:18:53.841328 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-03-28 01:18:53.841336 | orchestrator | Saturday 28 March 2026 01:14:50 +0000 (0:00:04.056) 0:05:14.085 ********
2026-03-28 01:18:53.841351 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:18:53.841359 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:18:53.841367 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:18:53.841374 | orchestrator |
2026-03-28 01:18:53.841382 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-03-28 01:18:53.841390 | orchestrator | Saturday 28 March 2026 01:14:50 +0000 (0:00:00.605) 0:05:14.690 ********
2026-03-28 01:18:53.841398 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:18:53.841406 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:18:53.841485 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:18:53.841495 | orchestrator |
2026-03-28 01:18:53.841503 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-03-28 01:18:53.841511 | orchestrator | Saturday 28 March 2026 01:14:51 +0000 (0:00:00.342) 0:05:15.032 ********
2026-03-28 01:18:53.841519 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:18:53.841527 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:18:53.841535 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:18:53.841543 | orchestrator |
2026-03-28 01:18:53.841551 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-03-28 01:18:53.841559 | orchestrator | Saturday 28 March 2026 01:14:52 +0000 (0:00:01.237) 0:05:16.270 ********
2026-03-28 01:18:53.841567 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-28 01:18:53.841576 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-28 01:18:53.841584 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-28 01:18:53.841592 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-28 01:18:53.841600 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-28 01:18:53.841608 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-28 01:18:53.841616 | orchestrator |
2026-03-28 01:18:53.841624 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-03-28 01:18:53.841632 | orchestrator | Saturday 28 March 2026 01:14:56 +0000 (0:00:03.548) 0:05:19.818 ********
2026-03-28 01:18:53.841640 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-28 01:18:53.841648 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-28 01:18:53.841656 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-28 01:18:53.841664 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-28 01:18:53.841672 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:18:53.841680 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-28 01:18:53.841688 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:18:53.841702 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-28 01:18:53.841710 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:18:53.841718 | orchestrator |
2026-03-28 01:18:53.841726 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-03-28 01:18:53.841733 | orchestrator | Saturday 28 March 2026 01:14:59 +0000 (0:00:00.151) 0:05:23.381 ********
2026-03-28 01:18:53.841741 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:18:53.841749 | orchestrator |
2026-03-28 01:18:53.841757 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-03-28 01:18:53.841765 | orchestrator | Saturday 28 March 2026 01:14:59 +0000 (0:00:00.151) 0:05:23.533 ********
2026-03-28 01:18:53.841773 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:18:53.841781 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:18:53.841789 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:18:53.841796 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:18:53.841804 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:18:53.841812 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:18:53.841820 | orchestrator |
2026-03-28 01:18:53.841828 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-03-28 01:18:53.841836 | orchestrator | Saturday 28 March 2026 01:15:00 +0000 (0:00:00.624) 0:05:24.158 ********
2026-03-28 01:18:53.841844 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-28 01:18:53.841852 | orchestrator |
2026-03-28 01:18:53.841860 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-03-28 01:18:53.841868 | orchestrator | Saturday 28 March 2026 01:15:01 +0000 (0:00:00.742) 0:05:24.900 ********
2026-03-28 01:18:53.841876 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:18:53.841932 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:18:53.841942 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:18:53.841950 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:18:53.841958 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:18:53.841966 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:18:53.841973 | orchestrator |
2026-03-28 01:18:53.841981 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-03-28 01:18:53.841989 | orchestrator | Saturday 28 March 2026 01:15:01 +0000 (0:00:00.867) 0:05:25.768 ********
2026-03-28 01:18:53.842008 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-28 01:18:53.842042 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-28 01:18:53.842058 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-28 01:18:53.842066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-28 01:18:53.842075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-28 01:18:53.842083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-28 01:18:53.842100 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-28 01:18:53.842109 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-28 01:18:53.842122 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-28 01:18:53.842130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.842139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.842147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.842159 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.842172 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.842185 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.842193 | orchestrator |
2026-03-28 01:18:53.842201 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-03-28 01:18:53.842209 | orchestrator | Saturday 28 March 2026 01:15:05 +0000 (0:00:03.805) 0:05:29.573 ********
2026-03-28 01:18:53.842218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-28 01:18:53.842226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-28 01:18:53.842238 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-28 01:18:53.842252 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-28 01:18:53.842265 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-28 01:18:53.842274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-28 01:18:53.842282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-28 01:18:53.842290 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.842302 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.842311 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.842326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-28 01:18:53.842333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-28 01:18:53.842370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.842377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.842387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.842394 | orchestrator |
2026-03-28 01:18:53.842401 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-03-28 01:18:53.842408 | orchestrator | Saturday 28 March 2026 01:15:12 +0000 (0:00:07.073) 0:05:36.646 ********
2026-03-28 01:18:53.842420 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:18:53.842427 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:18:53.842437 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:18:53.842444 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:18:53.842450 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:18:53.842457 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:18:53.842464 | orchestrator |
2026-03-28 01:18:53.842470 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-03-28 01:18:53.842477 | orchestrator | Saturday 28 March 2026 01:15:14 +0000 (0:00:01.440) 0:05:38.087 ********
2026-03-28 01:18:53.842483 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-28 01:18:53.842490 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-28 01:18:53.842497 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-28 01:18:53.842503 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-28 01:18:53.842510 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:18:53.842517 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-28 01:18:53.842523 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-28 01:18:53.842530 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:18:53.842537 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-28 01:18:53.842543 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:18:53.842550 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-28 01:18:53.842556 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-28 01:18:53.842563 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-28 01:18:53.842569 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-28 01:18:53.842576 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-28 01:18:53.842583 | orchestrator |
2026-03-28 01:18:53.842589 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-03-28 01:18:53.842596 | orchestrator | Saturday 28 March 2026 01:15:18 +0000 (0:00:03.963) 0:05:42.050 ********
2026-03-28 01:18:53.842602 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:18:53.842609 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:18:53.842615 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:18:53.842622 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:18:53.842629 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:18:53.842635 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:18:53.842642 | orchestrator |
2026-03-28 01:18:53.842649 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-03-28 01:18:53.842655 | orchestrator | Saturday 28 March 2026 01:15:18 +0000 (0:00:00.622) 0:05:42.673 ********
2026-03-28 01:18:53.842662 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-28 01:18:53.842669 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-28 01:18:53.842676 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-28 01:18:53.842682 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-28 01:18:53.842689 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-28 01:18:53.842700 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-28 01:18:53.842707 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-28 01:18:53.842713 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-28 01:18:53.842720 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-28 01:18:53.842726 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-28 01:18:53.842732 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:18:53.842739 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-28 01:18:53.842745 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:18:53.842752 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-28 01:18:53.842759 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:18:53.842771 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-28 01:18:53.842778 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-28 01:18:53.842785 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-28 01:18:53.842795 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-28 01:18:53.842802 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-28 01:18:53.842808 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-28 01:18:53.842815 | orchestrator |
2026-03-28 01:18:53.842821 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-03-28 01:18:53.842828 | orchestrator | Saturday 28 March 2026 01:15:24 +0000 (0:00:05.639) 0:05:48.312 ********
2026-03-28 01:18:53.842834 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-28 01:18:53.842841 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-28 01:18:53.842848 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-28 01:18:53.842854 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-28
01:18:53.842861 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-28 01:18:53.842867 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-28 01:18:53.842874 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-28 01:18:53.842881 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-28 01:18:53.842907 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-28 01:18:53.842916 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-28 01:18:53.842922 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-28 01:18:53.842929 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-28 01:18:53.842935 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-28 01:18:53.842942 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:18:53.842948 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-28 01:18:53.842960 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:18:53.842966 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-28 01:18:53.842973 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:18:53.842979 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-28 01:18:53.842986 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-28 01:18:53.842993 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-28 01:18:53.842999 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-28 01:18:53.843006 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-28 01:18:53.843013 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-28 01:18:53.843019 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-28 01:18:53.843026 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-28 01:18:53.843032 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-28 01:18:53.843039 | orchestrator |
2026-03-28 01:18:53.843046 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-03-28 01:18:53.843053 | orchestrator | Saturday 28 March 2026 01:15:31 +0000 (0:00:07.441) 0:05:55.754 ********
2026-03-28 01:18:53.843059 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:18:53.843066 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:18:53.843073 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:18:53.843079 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:18:53.843086 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:18:53.843092 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:18:53.843099 | orchestrator |
2026-03-28 01:18:53.843105 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-03-28 01:18:53.843112 | orchestrator | Saturday 28 March 2026 01:15:32 +0000 (0:00:00.831) 0:05:56.586 ********
2026-03-28 01:18:53.843119 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:18:53.843125 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:18:53.843132 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:18:53.843138 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:18:53.843145 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:18:53.843151 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:18:53.843158 | orchestrator |
2026-03-28 01:18:53.843164 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-03-28 01:18:53.843175 | orchestrator | Saturday 28 March 2026 01:15:33 +0000 (0:00:00.616) 0:05:57.202 ********
2026-03-28 01:18:53.843181 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:18:53.843188 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:18:53.843194 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:18:53.843201 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:18:53.843208 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:18:53.843214 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:18:53.843221 | orchestrator |
2026-03-28 01:18:53.843227 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-03-28 01:18:53.843237 | orchestrator | Saturday 28 March 2026 01:15:35 +0000 (0:00:02.418) 0:05:59.620 ********
2026-03-28 01:18:53.843245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-28 01:18:53.843256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:18:53.843263 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:53.843270 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:18:53.843278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:18:53.843285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:18:53.843300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:18:53.843308 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:18:53.843319 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:18:53.843326 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:18:53.843333 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:18:53.843340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:18:53.843347 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:18:53.843361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:18:53.843369 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:18:53.843376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:18:53.843388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:18:53.843395 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.843402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:18:53.843409 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:18:53.843416 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:18:53.843422 | orchestrator |
2026-03-28 01:18:53.843429 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-03-28 01:18:53.843436 | orchestrator | Saturday 28 March 2026 01:15:37 +0000 (0:00:01.732) 0:06:01.352 ********
2026-03-28 01:18:53.843443 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-28 01:18:53.843449 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-28 01:18:53.843456 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:18:53.843463 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-28 01:18:53.843469 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-28 01:18:53.843476 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:18:53.843483 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-28 01:18:53.843489 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-28 01:18:53.843496 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:18:53.843502 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-28 01:18:53.843509 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-28 01:18:53.843516 | orchestrator | skipping: [testbed-node-0] 2026-03-28
01:18:53.843522 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-28 01:18:53.843529 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-28 01:18:53.843535 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.843547 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-28 01:18:53.843553 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-28 01:18:53.843563 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.843570 | orchestrator | 2026-03-28 01:18:53.843577 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-28 01:18:53.843584 | orchestrator | Saturday 28 March 2026 01:15:38 +0000 (0:00:00.918) 0:06:02.271 ******** 2026-03-28 01:18:53.843595 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:18:53.843603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:18:53.843610 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:18:53.843617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:18:53.843624 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 01:18:53.843643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:18:53.843650 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 01:18:53.843657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:18:53.843664 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 01:18:53.843671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:53.843678 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:53.843693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:53.843704 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:53.843712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:53.843719 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:53.843726 | orchestrator | 2026-03-28 01:18:53.843732 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 
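The "Check nova-cell containers" task above compares each container definition against the running state and, when something changed, notifies the corresponding restart handlers that fire later in this play. A minimal sketch of that check-then-notify pattern in Ansible follows; the module name, variable name, and handler naming scheme are illustrative assumptions, not the actual kolla-ansible source:

```yaml
# Sketch of the compare-and-notify pattern seen in the log above.
# "kolla_container" / "nova_cell_services" are assumed names for illustration.
- name: Check nova-cell containers
  become: true
  kolla_container:                        # assumed module name
    action: "compare_container"           # reports changed when config/image differ
    name: "{{ item.value.container_name }}"
    image: "{{ item.value.image }}"
    volumes: "{{ item.value.volumes }}"
    dimensions: "{{ item.value.dimensions }}"
  with_dict: "{{ nova_cell_services }}"   # hypothetical service-definition dict
  when: item.value.enabled | bool
  notify:
    - "Restart {{ item.key }} container"  # matches handlers such as
                                          # "Restart nova-libvirt container"
```

Because the task only *notifies*, the actual restarts are deferred to the "Flush handlers" meta tasks and RUNNING HANDLER sections that appear next in the log.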
2026-03-28 01:18:53.843739 | orchestrator | Saturday 28 March 2026 01:15:41 +0000 (0:00:03.124) 0:06:05.395 ********
2026-03-28 01:18:53.843746 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:18:53.843752 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:18:53.843759 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:18:53.843766 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:18:53.843772 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:18:53.843779 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:18:53.843785 | orchestrator |
2026-03-28 01:18:53.843792 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-28 01:18:53.843798 | orchestrator | Saturday 28 March 2026 01:15:42 +0000 (0:00:00.798) 0:06:06.193 ********
2026-03-28 01:18:53.843809 | orchestrator |
2026-03-28 01:18:53.843816 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-28 01:18:53.843822 | orchestrator | Saturday 28 March 2026 01:15:42 +0000 (0:00:00.137) 0:06:06.331 ********
2026-03-28 01:18:53.843829 | orchestrator |
2026-03-28 01:18:53.843835 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-28 01:18:53.843842 | orchestrator | Saturday 28 March 2026 01:15:42 +0000 (0:00:00.133) 0:06:06.464 ********
2026-03-28 01:18:53.843848 | orchestrator |
2026-03-28 01:18:53.843855 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-28 01:18:53.843862 | orchestrator | Saturday 28 March 2026 01:15:42 +0000 (0:00:00.142) 0:06:06.607 ********
2026-03-28 01:18:53.843868 | orchestrator |
2026-03-28 01:18:53.843875 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-28 01:18:53.843881 | orchestrator | Saturday 28 March 2026 01:15:42 +0000 (0:00:00.136) 0:06:06.743 ********
2026-03-28 01:18:53.843902 | orchestrator |
2026-03-28 01:18:53.843909 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-28 01:18:53.843916 | orchestrator | Saturday 28 March 2026 01:15:43 +0000 (0:00:00.141) 0:06:06.884 ********
2026-03-28 01:18:53.843922 | orchestrator |
2026-03-28 01:18:53.843929 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-03-28 01:18:53.843936 | orchestrator | Saturday 28 March 2026 01:15:43 +0000 (0:00:00.367) 0:06:07.252 ********
2026-03-28 01:18:53.843942 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:18:53.843949 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:18:53.843955 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:18:53.843962 | orchestrator |
2026-03-28 01:18:53.843974 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-03-28 01:18:53.843981 | orchestrator | Saturday 28 March 2026 01:15:51 +0000 (0:00:08.191) 0:06:15.444 ********
2026-03-28 01:18:53.843988 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:18:53.843994 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:18:53.844001 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:18:53.844007 | orchestrator |
2026-03-28 01:18:53.844014 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-03-28 01:18:53.844024 | orchestrator | Saturday 28 March 2026 01:16:06 +0000 (0:00:14.663) 0:06:30.107 ********
2026-03-28 01:18:53.844031 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:18:53.844038 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:18:53.844044 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:18:53.844051 | orchestrator |
2026-03-28 01:18:53.844057 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-03-28 01:18:53.844064 | orchestrator | Saturday 28 March 2026 01:16:28 +0000 (0:00:21.699) 0:06:51.806 ********
2026-03-28 01:18:53.844071 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:18:53.844077 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:18:53.844084 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:18:53.844090 | orchestrator |
2026-03-28 01:18:53.844097 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-03-28 01:18:53.844103 | orchestrator | Saturday 28 March 2026 01:17:01 +0000 (0:00:33.346) 0:07:25.153 ********
2026-03-28 01:18:53.844110 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left).
2026-03-28 01:18:53.844117 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:18:53.844123 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left).
2026-03-28 01:18:53.844130 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:18:53.844137 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:18:53.844143 | orchestrator |
2026-03-28 01:18:53.844150 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-03-28 01:18:53.844157 | orchestrator | Saturday 28 March 2026 01:17:07 +0000 (0:00:06.379) 0:07:31.532 ********
2026-03-28 01:18:53.844163 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:18:53.844175 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:18:53.844181 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:18:53.844188 | orchestrator |
2026-03-28 01:18:53.844194 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-03-28 01:18:53.844201 | orchestrator | Saturday 28 March 2026 01:17:08 +0000 (0:00:00.776) 0:07:32.308 ********
2026-03-28 01:18:53.844207 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:18:53.844214 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:18:53.844221 | orchestrator | changed:
[testbed-node-4] 2026-03-28 01:18:53.844227 | orchestrator | 2026-03-28 01:18:53.844234 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-28 01:18:53.844241 | orchestrator | Saturday 28 March 2026 01:17:38 +0000 (0:00:29.492) 0:08:01.801 ******** 2026-03-28 01:18:53.844247 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:18:53.844254 | orchestrator | 2026-03-28 01:18:53.844260 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-28 01:18:53.844267 | orchestrator | Saturday 28 March 2026 01:17:38 +0000 (0:00:00.143) 0:08:01.945 ******** 2026-03-28 01:18:53.844274 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:18:53.844280 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:18:53.844287 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.844294 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.844300 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:53.844307 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-03-28 01:18:53.844314 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 01:18:53.844321 | orchestrator | 2026-03-28 01:18:53.844327 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-28 01:18:53.844334 | orchestrator | Saturday 28 March 2026 01:17:59 +0000 (0:00:21.766) 0:08:23.711 ******** 2026-03-28 01:18:53.844341 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:18:53.844347 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:18:53.844354 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:53.844360 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.844367 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.844374 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:18:53.844380 | orchestrator | 2026-03-28 01:18:53.844387 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-28 01:18:53.844393 | orchestrator | Saturday 28 March 2026 01:18:10 +0000 (0:00:10.653) 0:08:34.365 ******** 2026-03-28 01:18:53.844400 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:18:53.844406 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:18:53.844413 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.844419 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:53.844426 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.844432 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-03-28 01:18:53.844439 | orchestrator | 2026-03-28 01:18:53.844445 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-28 01:18:53.844452 | orchestrator | Saturday 28 March 2026 01:18:15 +0000 (0:00:04.817) 0:08:39.182 ******** 2026-03-28 01:18:53.844459 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 01:18:53.844466 | 
orchestrator | 2026-03-28 01:18:53.844472 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-28 01:18:53.844479 | orchestrator | Saturday 28 March 2026 01:18:29 +0000 (0:00:14.234) 0:08:53.417 ******** 2026-03-28 01:18:53.844486 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 01:18:53.844492 | orchestrator | 2026-03-28 01:18:53.844499 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-28 01:18:53.844506 | orchestrator | Saturday 28 March 2026 01:18:31 +0000 (0:00:01.433) 0:08:54.851 ******** 2026-03-28 01:18:53.844512 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:18:53.844523 | orchestrator | 2026-03-28 01:18:53.844533 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-28 01:18:53.844541 | orchestrator | Saturday 28 March 2026 01:18:32 +0000 (0:00:01.505) 0:08:56.356 ******** 2026-03-28 01:18:53.844547 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 01:18:53.844554 | orchestrator | 2026-03-28 01:18:53.844560 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-28 01:18:53.844570 | orchestrator | Saturday 28 March 2026 01:18:45 +0000 (0:00:12.595) 0:09:08.951 ******** 2026-03-28 01:18:53.844577 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:18:53.844584 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:18:53.844590 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:18:53.844597 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:18:53.844604 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:18:53.844610 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:18:53.844617 | orchestrator | 2026-03-28 01:18:53.844624 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-28 01:18:53.844631 | orchestrator | 2026-03-28 
01:18:53.844637 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-28 01:18:53.844644 | orchestrator | Saturday 28 March 2026 01:18:47 +0000 (0:00:01.989) 0:09:10.941 ******** 2026-03-28 01:18:53.844651 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:53.844657 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:18:53.844664 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:18:53.844671 | orchestrator | 2026-03-28 01:18:53.844677 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-28 01:18:53.844684 | orchestrator | 2026-03-28 01:18:53.844691 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-28 01:18:53.844698 | orchestrator | Saturday 28 March 2026 01:18:48 +0000 (0:00:01.308) 0:09:12.250 ******** 2026-03-28 01:18:53.844704 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:53.844711 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.844718 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.844724 | orchestrator | 2026-03-28 01:18:53.844731 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-28 01:18:53.844738 | orchestrator | 2026-03-28 01:18:53.844744 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-28 01:18:53.844751 | orchestrator | Saturday 28 March 2026 01:18:49 +0000 (0:00:00.588) 0:09:12.839 ******** 2026-03-28 01:18:53.844758 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-28 01:18:53.844764 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-28 01:18:53.844771 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-28 01:18:53.844778 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-28 01:18:53.844784 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-28 01:18:53.844791 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-28 01:18:53.844798 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:18:53.844805 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-28 01:18:53.844811 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-28 01:18:53.844818 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-28 01:18:53.844825 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-28 01:18:53.844831 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-28 01:18:53.844838 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-28 01:18:53.844845 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:18:53.844851 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-28 01:18:53.844858 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-28 01:18:53.844865 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-28 01:18:53.844875 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-03-28 01:18:53.844882 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-03-28 01:18:53.844926 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-03-28 01:18:53.844933 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:18:53.844940 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-03-28 01:18:53.844946 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-28 01:18:53.844953 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-28 01:18:53.844960 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-03-28 01:18:53.844966 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-03-28 01:18:53.844973 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-03-28 01:18:53.844980 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:53.844986 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-03-28 01:18:53.844993 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-28 01:18:53.845000 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-28 01:18:53.845006 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-03-28 01:18:53.845013 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-03-28 01:18:53.845020 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-03-28 01:18:53.845026 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.845033 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-03-28 01:18:53.845040 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-28 01:18:53.845046 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-28 01:18:53.845053 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-03-28 01:18:53.845059 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-03-28 01:18:53.845070 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-03-28 01:18:53.845077 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.845083 | orchestrator | 2026-03-28 01:18:53.845090 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-03-28 01:18:53.845097 | orchestrator | 2026-03-28 01:18:53.845104 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-03-28 01:18:53.845111 | orchestrator | Saturday 28 March 2026 01:18:50 +0000 (0:00:01.394) 
0:09:14.233 ******** 2026-03-28 01:18:53.845117 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-03-28 01:18:53.845154 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-28 01:18:53.845162 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:53.845168 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-03-28 01:18:53.845175 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-28 01:18:53.845181 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.845188 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-03-28 01:18:53.845195 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-28 01:18:53.845201 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:53.845208 | orchestrator | 2026-03-28 01:18:53.845215 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-03-28 01:18:53.845221 | orchestrator | 2026-03-28 01:18:53.845228 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-03-28 01:18:53.845234 | orchestrator | Saturday 28 March 2026 01:18:51 +0000 (0:00:00.813) 0:09:15.046 ******** 2026-03-28 01:18:53.845241 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:53.845247 | orchestrator | 2026-03-28 01:18:53.845253 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-03-28 01:18:53.845259 | orchestrator | 2026-03-28 01:18:53.845265 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-03-28 01:18:53.845277 | orchestrator | Saturday 28 March 2026 01:18:51 +0000 (0:00:00.697) 0:09:15.744 ******** 2026-03-28 01:18:53.845283 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:53.845289 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:53.845296 | orchestrator | skipping: [testbed-node-2] 
2026-03-28 01:18:53.845302 | orchestrator | 2026-03-28 01:18:53.845308 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:18:53.845314 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:18:53.845321 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2026-03-28 01:18:53.845327 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-03-28 01:18:53.845334 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-03-28 01:18:53.845340 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-28 01:18:53.845346 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-28 01:18:53.845352 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-28 01:18:53.845358 | orchestrator | 2026-03-28 01:18:53.845364 | orchestrator | 2026-03-28 01:18:53.845370 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:18:53.845377 | orchestrator | Saturday 28 March 2026 01:18:52 +0000 (0:00:00.445) 0:09:16.190 ******** 2026-03-28 01:18:53.845383 | orchestrator | =============================================================================== 2026-03-28 01:18:53.845389 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 36.97s 2026-03-28 01:18:53.845395 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 33.35s 2026-03-28 01:18:53.845401 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 29.49s 2026-03-28 01:18:53.845407 | orchestrator | nova-cell : 
Running Nova cell bootstrap container ---------------------- 23.44s 2026-03-28 01:18:53.845414 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 23.00s 2026-03-28 01:18:53.845420 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.77s 2026-03-28 01:18:53.845426 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.70s 2026-03-28 01:18:53.845432 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 20.28s 2026-03-28 01:18:53.845438 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.45s 2026-03-28 01:18:53.845447 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.91s 2026-03-28 01:18:53.845453 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 14.66s 2026-03-28 01:18:53.845460 | orchestrator | nova-cell : Create cell ------------------------------------------------ 14.65s 2026-03-28 01:18:53.845466 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.23s 2026-03-28 01:18:53.845472 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.41s 2026-03-28 01:18:53.845478 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.60s 2026-03-28 01:18:53.845488 | orchestrator | nova : Restart nova-api container -------------------------------------- 11.52s 2026-03-28 01:18:53.845494 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.65s 2026-03-28 01:18:53.845504 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.82s 2026-03-28 01:18:53.845510 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 8.19s 2026-03-28 01:18:53.845517 | orchestrator | service-ks-register : nova 
| Granting user roles ------------------------ 7.98s 2026-03-28 01:18:56.878184 | orchestrator | 2026-03-28 01:18:56 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:18:56.878288 | orchestrator | 2026-03-28 01:18:56 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:18:59.919951 | orchestrator | 2026-03-28 01:18:59 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:18:59.922350 | orchestrator | 2026-03-28 01:18:59 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:19:02.964195 | orchestrator | 2026-03-28 01:19:02 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:19:02.964295 | orchestrator | 2026-03-28 01:19:02 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:19:06.010348 | orchestrator | 2026-03-28 01:19:06 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:19:06.010447 | orchestrator | 2026-03-28 01:19:06 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:19:09.055447 | orchestrator | 2026-03-28 01:19:09 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:19:09.055597 | orchestrator | 2026-03-28 01:19:09 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:19:12.110266 | orchestrator | 2026-03-28 01:19:12 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:19:12.111093 | orchestrator | 2026-03-28 01:19:12 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:19:15.147652 | orchestrator | 2026-03-28 01:19:15 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:19:15.147753 | orchestrator | 2026-03-28 01:19:15 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:19:18.192854 | orchestrator | 2026-03-28 01:19:18 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state STARTED 2026-03-28 01:19:18.193084 | orchestrator | 2026-03-28 01:19:18 | INFO  | 
Wait 1 second(s) until the next check 2026-03-28 01:19:21.236602 | orchestrator | 2026-03-28 01:19:21 | INFO  | Task 8960f5b0-6f31-4695-a824-ba5fc6290a1e is in state SUCCESS 2026-03-28 01:19:21.238081 | orchestrator | 2026-03-28 01:19:21.238122 | orchestrator | 2026-03-28 01:19:21.238131 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:19:21.238137 | orchestrator | 2026-03-28 01:19:21.238142 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:19:21.238148 | orchestrator | Saturday 28 March 2026 01:14:18 +0000 (0:00:00.314) 0:00:00.314 ******** 2026-03-28 01:19:21.238152 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:19:21.238158 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:19:21.238162 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:19:21.238166 | orchestrator | 2026-03-28 01:19:21.238170 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:19:21.238174 | orchestrator | Saturday 28 March 2026 01:14:18 +0000 (0:00:00.418) 0:00:00.733 ******** 2026-03-28 01:19:21.238178 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-28 01:19:21.238236 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-28 01:19:21.238240 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-28 01:19:21.238244 | orchestrator | 2026-03-28 01:19:21.238248 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-28 01:19:21.238252 | orchestrator | 2026-03-28 01:19:21.238256 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-28 01:19:21.238260 | orchestrator | Saturday 28 March 2026 01:14:19 +0000 (0:00:00.719) 0:00:01.452 ******** 2026-03-28 01:19:21.238285 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:19:21.238291 | orchestrator | 2026-03-28 01:19:21.238295 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-03-28 01:19:21.238299 | orchestrator | Saturday 28 March 2026 01:14:19 +0000 (0:00:00.631) 0:00:02.084 ******** 2026-03-28 01:19:21.238303 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-28 01:19:21.238307 | orchestrator | 2026-03-28 01:19:21.238311 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-28 01:19:21.238315 | orchestrator | Saturday 28 March 2026 01:14:23 +0000 (0:00:03.898) 0:00:05.983 ******** 2026-03-28 01:19:21.238318 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-28 01:19:21.238323 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-28 01:19:21.238327 | orchestrator | 2026-03-28 01:19:21.238333 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-28 01:19:21.238339 | orchestrator | Saturday 28 March 2026 01:14:30 +0000 (0:00:06.861) 0:00:12.844 ******** 2026-03-28 01:19:21.238346 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 01:19:21.238353 | orchestrator | 2026-03-28 01:19:21.238375 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-28 01:19:21.238381 | orchestrator | Saturday 28 March 2026 01:14:34 +0000 (0:00:03.607) 0:00:16.452 ******** 2026-03-28 01:19:21.238386 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 01:19:21.238390 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-28 01:19:21.238394 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-28 01:19:21.238398 | orchestrator | 
2026-03-28 01:19:21.238640 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-28 01:19:21.238646 | orchestrator | Saturday 28 March 2026 01:14:42 +0000 (0:00:08.318) 0:00:24.770 ******** 2026-03-28 01:19:21.238650 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 01:19:21.238654 | orchestrator | 2026-03-28 01:19:21.238658 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-28 01:19:21.238664 | orchestrator | Saturday 28 March 2026 01:14:46 +0000 (0:00:03.546) 0:00:28.316 ******** 2026-03-28 01:19:21.238670 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-28 01:19:21.238675 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-28 01:19:21.238681 | orchestrator | 2026-03-28 01:19:21.238687 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-28 01:19:21.238693 | orchestrator | Saturday 28 March 2026 01:14:53 +0000 (0:00:07.519) 0:00:35.836 ******** 2026-03-28 01:19:21.238820 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-28 01:19:21.238829 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-28 01:19:21.238834 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-28 01:19:21.238837 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-28 01:19:21.238841 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-28 01:19:21.238845 | orchestrator | 2026-03-28 01:19:21.238849 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-28 01:19:21.238853 | orchestrator | Saturday 28 March 2026 01:15:10 +0000 (0:00:16.569) 0:00:52.406 ******** 2026-03-28 01:19:21.238883 | orchestrator | included: 
/ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:19:21.238888 | orchestrator | 2026-03-28 01:19:21.238892 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-03-28 01:19:21.238896 | orchestrator | Saturday 28 March 2026 01:15:10 +0000 (0:00:00.668) 0:00:53.074 ******** 2026-03-28 01:19:21.238908 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:19:21.238912 | orchestrator | 2026-03-28 01:19:21.238916 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-28 01:19:21.238920 | orchestrator | Saturday 28 March 2026 01:15:16 +0000 (0:00:05.848) 0:00:58.923 ******** 2026-03-28 01:19:21.238923 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:19:21.238927 | orchestrator | 2026-03-28 01:19:21.238931 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-28 01:19:21.238954 | orchestrator | Saturday 28 March 2026 01:15:21 +0000 (0:00:04.976) 0:01:03.899 ******** 2026-03-28 01:19:21.238959 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:19:21.238963 | orchestrator | 2026-03-28 01:19:21.238967 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-28 01:19:21.238971 | orchestrator | Saturday 28 March 2026 01:15:25 +0000 (0:00:03.514) 0:01:07.413 ******** 2026-03-28 01:19:21.238974 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-28 01:19:21.238978 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-28 01:19:21.238982 | orchestrator | 2026-03-28 01:19:21.238986 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-28 01:19:21.238990 | orchestrator | Saturday 28 March 2026 01:15:35 +0000 (0:00:10.110) 0:01:17.524 ******** 2026-03-28 01:19:21.238994 | orchestrator | changed: [testbed-node-0] => 
(item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-28 01:19:21.238998 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-28 01:19:21.239004 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-28 01:19:21.239009 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-28 01:19:21.239013 | orchestrator | 2026-03-28 01:19:21.239017 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-28 01:19:21.239021 | orchestrator | Saturday 28 March 2026 01:15:54 +0000 (0:00:18.817) 0:01:36.341 ******** 2026-03-28 01:19:21.239024 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:19:21.239028 | orchestrator | 2026-03-28 01:19:21.239032 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-28 01:19:21.239036 | orchestrator | Saturday 28 March 2026 01:15:58 +0000 (0:00:04.578) 0:01:40.920 ******** 2026-03-28 01:19:21.239039 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:19:21.239043 | orchestrator | 2026-03-28 01:19:21.239047 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-28 01:19:21.239051 | orchestrator | Saturday 28 March 2026 01:16:02 +0000 (0:00:04.147) 0:01:45.067 ******** 2026-03-28 01:19:21.239054 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:19:21.239058 | orchestrator | 2026-03-28 01:19:21.239062 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-28 01:19:21.239066 | orchestrator | Saturday 28 March 2026 01:16:03 +0000 (0:00:00.264) 0:01:45.331 ******** 
2026-03-28 01:19:21.239069 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:19:21.239073 | orchestrator | 2026-03-28 01:19:21.239082 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-28 01:19:21.239086 | orchestrator | Saturday 28 March 2026 01:16:06 +0000 (0:00:03.331) 0:01:48.663 ******** 2026-03-28 01:19:21.239090 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:19:21.239094 | orchestrator | 2026-03-28 01:19:21.239098 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-28 01:19:21.239101 | orchestrator | Saturday 28 March 2026 01:16:07 +0000 (0:00:01.370) 0:01:50.033 ******** 2026-03-28 01:19:21.239105 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:19:21.239113 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:19:21.239117 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:19:21.239121 | orchestrator | 2026-03-28 01:19:21.239125 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-28 01:19:21.239128 | orchestrator | Saturday 28 March 2026 01:16:13 +0000 (0:00:06.122) 0:01:56.155 ******** 2026-03-28 01:19:21.239132 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:19:21.239136 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:19:21.239139 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:19:21.239143 | orchestrator | 2026-03-28 01:19:21.239147 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-28 01:19:21.239151 | orchestrator | Saturday 28 March 2026 01:16:18 +0000 (0:00:04.694) 0:02:00.850 ******** 2026-03-28 01:19:21.239154 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:19:21.239158 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:19:21.239162 | orchestrator | changed: [testbed-node-2] 
2026-03-28 01:19:21.239165 | orchestrator | 2026-03-28 01:19:21.239169 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-28 01:19:21.239173 | orchestrator | Saturday 28 March 2026 01:16:19 +0000 (0:00:00.707) 0:02:01.558 ******** 2026-03-28 01:19:21.239177 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:19:21.239180 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:19:21.239184 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:19:21.239188 | orchestrator | 2026-03-28 01:19:21.239192 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-28 01:19:21.239195 | orchestrator | Saturday 28 March 2026 01:16:21 +0000 (0:00:01.755) 0:02:03.313 ******** 2026-03-28 01:19:21.239199 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:19:21.239203 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:19:21.239207 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:19:21.239210 | orchestrator | 2026-03-28 01:19:21.239214 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-28 01:19:21.239218 | orchestrator | Saturday 28 March 2026 01:16:22 +0000 (0:00:01.115) 0:02:04.428 ******** 2026-03-28 01:19:21.239222 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:19:21.239225 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:19:21.239229 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:19:21.239233 | orchestrator | 2026-03-28 01:19:21.239236 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-28 01:19:21.239240 | orchestrator | Saturday 28 March 2026 01:16:23 +0000 (0:00:01.099) 0:02:05.528 ******** 2026-03-28 01:19:21.239244 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:19:21.239248 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:19:21.239251 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:19:21.239255 | 
orchestrator | 2026-03-28 01:19:21.239270 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-28 01:19:21.239275 | orchestrator | Saturday 28 March 2026 01:16:25 +0000 (0:00:01.669) 0:02:07.198 ******** 2026-03-28 01:19:21.239279 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:19:21.239282 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:19:21.239286 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:19:21.239290 | orchestrator | 2026-03-28 01:19:21.239294 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-28 01:19:21.239297 | orchestrator | Saturday 28 March 2026 01:16:26 +0000 (0:00:01.564) 0:02:08.762 ******** 2026-03-28 01:19:21.239301 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:19:21.239305 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:19:21.239309 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:19:21.239313 | orchestrator | 2026-03-28 01:19:21.239316 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-28 01:19:21.239320 | orchestrator | Saturday 28 March 2026 01:16:27 +0000 (0:00:00.625) 0:02:09.388 ******** 2026-03-28 01:19:21.239324 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:19:21.239327 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:19:21.239331 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:19:21.239338 | orchestrator | 2026-03-28 01:19:21.239342 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-28 01:19:21.239346 | orchestrator | Saturday 28 March 2026 01:16:29 +0000 (0:00:02.460) 0:02:11.849 ******** 2026-03-28 01:19:21.239350 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:19:21.239354 | orchestrator | 2026-03-28 01:19:21.239357 | orchestrator | TASK [octavia : Get amphora flavor 
info] *************************************** 2026-03-28 01:19:21.239361 | orchestrator | Saturday 28 March 2026 01:16:30 +0000 (0:00:00.769) 0:02:12.618 ******** 2026-03-28 01:19:21.239365 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:19:21.239369 | orchestrator | 2026-03-28 01:19:21.239372 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-28 01:19:21.239376 | orchestrator | Saturday 28 March 2026 01:16:34 +0000 (0:00:04.391) 0:02:17.010 ******** 2026-03-28 01:19:21.239380 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:19:21.239384 | orchestrator | 2026-03-28 01:19:21.239387 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-28 01:19:21.239391 | orchestrator | Saturday 28 March 2026 01:16:38 +0000 (0:00:03.319) 0:02:20.329 ******** 2026-03-28 01:19:21.239395 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-28 01:19:21.239399 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-28 01:19:21.239403 | orchestrator | 2026-03-28 01:19:21.239407 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-28 01:19:21.239410 | orchestrator | Saturday 28 March 2026 01:16:45 +0000 (0:00:07.136) 0:02:27.466 ******** 2026-03-28 01:19:21.239417 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:19:21.239421 | orchestrator | 2026-03-28 01:19:21.239425 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-28 01:19:21.239429 | orchestrator | Saturday 28 March 2026 01:16:48 +0000 (0:00:03.456) 0:02:30.922 ******** 2026-03-28 01:19:21.239432 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:19:21.239436 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:19:21.239440 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:19:21.239444 | orchestrator | 2026-03-28 01:19:21.239447 | orchestrator | TASK [octavia : Ensuring 
config directories exist] ***************************** 2026-03-28 01:19:21.239451 | orchestrator | Saturday 28 March 2026 01:16:49 +0000 (0:00:00.388) 0:02:31.311 ******** 2026-03-28 01:19:21.239458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:19:21.239479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:19:21.239488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:19:21.239494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:19:21.239503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:19:21.239507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:19:21.239513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:19:21.239518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:19:21.239538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:19:21.239544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:19:21.239550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:19:21.239558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:19:21.239563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:19:21.239567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:19:21.239582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:19:21.239591 | orchestrator | 2026-03-28 01:19:21.239595 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-28 01:19:21.239600 | orchestrator | Saturday 28 March 2026 01:16:51 +0000 (0:00:02.534) 0:02:33.846 ******** 2026-03-28 01:19:21.239604 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:19:21.239609 | orchestrator | 2026-03-28 01:19:21.239613 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-28 01:19:21.239617 | orchestrator | Saturday 28 March 2026 01:16:51 +0000 (0:00:00.150) 0:02:33.996 ******** 2026-03-28 01:19:21.239622 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:19:21.239626 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:19:21.239630 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:19:21.239634 | orchestrator | 2026-03-28 01:19:21.239637 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-28 01:19:21.239641 | orchestrator | Saturday 28 March 2026 01:16:52 +0000 (0:00:00.539) 0:02:34.535 ******** 2026-03-28 01:19:21.239645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:19:21.239652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:19:21.239656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:19:21.239660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:19:21.239667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:19:21.239671 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:19:21.239687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:19:21.239692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:19:21.239704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:19:21.239710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:19:21.239716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:19:21.239727 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:19:21.239751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:19:21.239758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:19:21.239765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:19:21.239775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:19:21.239782 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:19:21.239788 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:19:21.239796 | orchestrator | 2026-03-28 01:19:21.239800 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-28 01:19:21.239803 | orchestrator | Saturday 28 March 2026 01:16:53 +0000 (0:00:00.774) 0:02:35.309 ******** 2026-03-28 01:19:21.239807 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:19:21.239811 | orchestrator | 2026-03-28 01:19:21.239815 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-28 01:19:21.239819 | orchestrator | Saturday 28 March 2026 01:16:53 +0000 (0:00:00.572) 0:02:35.882 ******** 2026-03-28 01:19:21.239823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:19:21.239839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:19:21.239844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 
'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:19:21.239851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:19:21.239874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:19:21.239879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 
01:19:21.239883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:19:21.239889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:19:21.239893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}}) 2026-03-28 01:19:21.239897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:19:21.239904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:19:21.239912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': 
'30'}}}) 2026-03-28 01:19:21.239916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:19:21.239927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:19:21.239931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:19:21.239935 | orchestrator | 2026-03-28 01:19:21.239939 | orchestrator | TASK 
[service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-28 01:19:21.239943 | orchestrator | Saturday 28 March 2026 01:16:59 +0000 (0:00:05.758) 0:02:41.640 ******** 2026-03-28 01:19:21.239949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:19:21.239956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:19:21.239960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:19:21.239964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:19:21.239971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:19:21.239975 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:19:21.239979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:19:21.239982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:19:21.239994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}})  2026-03-28 01:19:21.239998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:19:21.240002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:19:21.240006 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:19:21.240012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:19:21.240017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:19:21.240021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:19:21.240031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:19:21.240035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:19:21.240039 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:19:21.240043 | orchestrator | 2026-03-28 01:19:21.240046 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-28 01:19:21.240050 | orchestrator | Saturday 28 March 2026 01:17:00 +0000 (0:00:00.738) 0:02:42.378 ******** 2026-03-28 01:19:21.240054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:19:21.240066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:19:21.240072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:19:21.240078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:19:21.240095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:19:21.240102 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:19:21.240109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:19:21.240116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:19:21.240128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:19:21.240135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:19:21.240142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:19:21.240154 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:19:21.240163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:19:21.240167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:19:21.240171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:19:21.240182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:19:21.240186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:19:21.240193 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:19:21.240197 | orchestrator |
2026-03-28 01:19:21.240200 | orchestrator | TASK [octavia : Copying over config.json files for services] *******************
2026-03-28 01:19:21.240204 | orchestrator | Saturday 28 March 2026 01:17:01 +0000 (0:00:00.928) 0:02:43.306 ********
2026-03-28 01:19:21.240211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-28 01:19:21.240215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-28 01:19:21.240219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-28 01:19:21.240225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-28 01:19:21.240229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-28 01:19:21.240237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-28 01:19:21.240244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-28 01:19:21.240250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-28 01:19:21.240256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-28 01:19:21.240262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-28 01:19:21.240272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-28 01:19:21.240283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-28 01:19:21.240289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:19:21.240299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:19:21.240304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:19:21.240308 | orchestrator |
2026-03-28 01:19:21.240311 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2026-03-28 01:19:21.240315 | orchestrator | Saturday 28 March 2026 01:17:06 +0000 (0:00:05.554) 0:02:48.861 ********
2026-03-28 01:19:21.240319 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-03-28 01:19:21.240323 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-03-28 01:19:21.240327 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-03-28 01:19:21.240331 | orchestrator |
2026-03-28 01:19:21.240335 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2026-03-28 01:19:21.240338 | orchestrator | Saturday 28 March 2026 01:17:08 +0000 (0:00:02.004) 0:02:50.865 ********
2026-03-28 01:19:21.240346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-28 01:19:21.240355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-28 01:19:21.240362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-28 01:19:21.240366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-28 01:19:21.240370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-28 01:19:21.240374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-28 01:19:21.240380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-28 01:19:21.240389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-28 01:19:21.240393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-28 01:19:21.240401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-28 01:19:21.240405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-28 01:19:21.240409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-28 01:19:21.240413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:19:21.240425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:19:21.240429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:19:21.240433 | orchestrator |
2026-03-28 01:19:21.240437 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2026-03-28 01:19:21.240441 | orchestrator | Saturday 28 March 2026 01:17:29 +0000 (0:00:20.986) 0:03:11.851 ********
2026-03-28 01:19:21.240444 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:19:21.240448 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:19:21.240452 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:19:21.240456 | orchestrator |
2026-03-28 01:19:21.240459 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-03-28 01:19:21.240463 | orchestrator | Saturday 28 March 2026 01:17:31 +0000 (0:00:01.571) 0:03:13.423 ********
2026-03-28 01:19:21.240467 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-28 01:19:21.240471 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-28 01:19:21.240474 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-28 01:19:21.240481 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-28 01:19:21.240485 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-28 01:19:21.240489 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-28 01:19:21.240492 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-28 01:19:21.240496 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-28 01:19:21.240500 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-28 01:19:21.240504 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-28 01:19:21.240507 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-28 01:19:21.240511 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-28 01:19:21.240515 | orchestrator |
2026-03-28 01:19:21.240519 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-03-28 01:19:21.240523 | orchestrator | Saturday 28 March 2026 01:17:37 +0000 (0:00:05.793) 0:03:19.217 ********
2026-03-28 01:19:21.240526 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-28 01:19:21.240530 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-28 01:19:21.240534 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-28 01:19:21.240538 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-28 01:19:21.240546 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-28 01:19:21.240550 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-28 01:19:21.240554 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-28 01:19:21.240558 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-28 01:19:21.240561 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-28 01:19:21.240565 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-28 01:19:21.240569 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-28 01:19:21.240573 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-28 01:19:21.240576 | orchestrator |
2026-03-28 01:19:21.240581 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-03-28 01:19:21.240587 | orchestrator | Saturday 28 March 2026 01:17:44 +0000 (0:00:07.325) 0:03:26.542 ********
2026-03-28 01:19:21.240592 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-28 01:19:21.240598 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-28 01:19:21.240604 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-28 01:19:21.240610 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-28 01:19:21.240616 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-28 01:19:21.240622 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-28 01:19:21.240628 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-28 01:19:21.240635 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-28 01:19:21.240642 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-28 01:19:21.240646 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-28 01:19:21.240649 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-28 01:19:21.240653 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-28 01:19:21.240657 | orchestrator |
2026-03-28 01:19:21.240661 | orchestrator | TASK [octavia : Check octavia containers] **************************************
2026-03-28 01:19:21.240665 | orchestrator | Saturday 28 March 2026 01:17:49 +0000 (0:00:05.384) 0:03:31.927 ********
2026-03-28 01:19:21.240669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-28 01:19:21.240676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-28 01:19:21.240685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-28 01:19:21.240689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-28 01:19:21.240695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-28 01:19:21.240699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-28 01:19:21.240703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-28 01:19:21.240711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-28 01:19:21.240721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-28 01:19:21.240730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-28 01:19:21.240738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-28 01:19:21.240748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-28 01:19:21.240755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:19:21.240760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:19:21.240771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:19:21.240783 | orchestrator |
2026-03-28 01:19:21.240789 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-28 01:19:21.240795 | orchestrator | Saturday 28 March 2026 01:17:53 +0000 (0:00:03.792) 0:03:35.719 ********
2026-03-28 01:19:21.240801 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:19:21.240805 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:19:21.240809 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:19:21.240813 | orchestrator |
2026-03-28 01:19:21.240816 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-03-28 01:19:21.240820 | orchestrator | Saturday 28 March 2026 01:17:53 +0000 (0:00:00.330) 0:03:36.049 ********
2026-03-28 01:19:21.240824 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:19:21.240828 | orchestrator |
2026-03-28 01:19:21.240832 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-03-28 01:19:21.240836 | orchestrator | Saturday 28 March 2026 01:17:56 +0000 (0:00:02.281) 0:03:38.331 ********
2026-03-28 01:19:21.240839 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:19:21.240843 | orchestrator |
2026-03-28 01:19:21.240847 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-03-28 01:19:21.240851 | orchestrator | Saturday 28 March 2026 01:17:58 +0000 (0:00:02.293) 0:03:40.624 ********
2026-03-28 01:19:21.240854 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:19:21.240910 | orchestrator |
2026-03-28 01:19:21.240914 | orchestrator |
TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-28 01:19:21.240918 | orchestrator | Saturday 28 March 2026 01:18:00 +0000 (0:00:02.435) 0:03:43.059 ******** 2026-03-28 01:19:21.240922 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:19:21.240926 | orchestrator | 2026-03-28 01:19:21.240930 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-03-28 01:19:21.240934 | orchestrator | Saturday 28 March 2026 01:18:04 +0000 (0:00:03.719) 0:03:46.779 ******** 2026-03-28 01:19:21.240937 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:19:21.240941 | orchestrator | 2026-03-28 01:19:21.240945 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-28 01:19:21.240949 | orchestrator | Saturday 28 March 2026 01:18:28 +0000 (0:00:24.304) 0:04:11.083 ******** 2026-03-28 01:19:21.240953 | orchestrator | 2026-03-28 01:19:21.240956 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-28 01:19:21.240960 | orchestrator | Saturday 28 March 2026 01:18:28 +0000 (0:00:00.075) 0:04:11.159 ******** 2026-03-28 01:19:21.240964 | orchestrator | 2026-03-28 01:19:21.240968 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-28 01:19:21.240971 | orchestrator | Saturday 28 March 2026 01:18:29 +0000 (0:00:00.112) 0:04:11.272 ******** 2026-03-28 01:19:21.240975 | orchestrator | 2026-03-28 01:19:21.240979 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-28 01:19:21.240986 | orchestrator | Saturday 28 March 2026 01:18:29 +0000 (0:00:00.090) 0:04:11.362 ******** 2026-03-28 01:19:21.240990 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:19:21.240994 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:19:21.240998 | orchestrator | changed: [testbed-node-1] 2026-03-28 
01:19:21.241001 | orchestrator | 2026-03-28 01:19:21.241005 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-28 01:19:21.241009 | orchestrator | Saturday 28 March 2026 01:18:45 +0000 (0:00:15.998) 0:04:27.360 ******** 2026-03-28 01:19:21.241013 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:19:21.241022 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:19:21.241025 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:19:21.241029 | orchestrator | 2026-03-28 01:19:21.241033 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-28 01:19:21.241037 | orchestrator | Saturday 28 March 2026 01:18:51 +0000 (0:00:06.756) 0:04:34.117 ******** 2026-03-28 01:19:21.241040 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:19:21.241044 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:19:21.241048 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:19:21.241052 | orchestrator | 2026-03-28 01:19:21.241055 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-28 01:19:21.241059 | orchestrator | Saturday 28 March 2026 01:18:58 +0000 (0:00:06.096) 0:04:40.214 ******** 2026-03-28 01:19:21.241063 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:19:21.241067 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:19:21.241071 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:19:21.241074 | orchestrator | 2026-03-28 01:19:21.241078 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-28 01:19:21.241082 | orchestrator | Saturday 28 March 2026 01:19:08 +0000 (0:00:10.514) 0:04:50.729 ******** 2026-03-28 01:19:21.241086 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:19:21.241089 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:19:21.241093 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:19:21.241097 
| orchestrator | 2026-03-28 01:19:21.241101 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:19:21.241105 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 01:19:21.241109 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 01:19:21.241113 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 01:19:21.241117 | orchestrator | 2026-03-28 01:19:21.241121 | orchestrator | 2026-03-28 01:19:21.241128 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:19:21.241132 | orchestrator | Saturday 28 March 2026 01:19:19 +0000 (0:00:10.657) 0:05:01.386 ******** 2026-03-28 01:19:21.241136 | orchestrator | =============================================================================== 2026-03-28 01:19:21.241139 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 24.30s 2026-03-28 01:19:21.241143 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 20.99s 2026-03-28 01:19:21.241147 | orchestrator | octavia : Add rules for security groups -------------------------------- 18.82s 2026-03-28 01:19:21.241151 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.57s 2026-03-28 01:19:21.241154 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.00s 2026-03-28 01:19:21.241158 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.66s 2026-03-28 01:19:21.241162 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.51s 2026-03-28 01:19:21.241166 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.11s 
2026-03-28 01:19:21.241169 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.32s 2026-03-28 01:19:21.241173 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.52s 2026-03-28 01:19:21.241177 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 7.33s 2026-03-28 01:19:21.241181 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.14s 2026-03-28 01:19:21.241185 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.86s 2026-03-28 01:19:21.241188 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.76s 2026-03-28 01:19:21.241195 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.12s 2026-03-28 01:19:21.241199 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 6.10s 2026-03-28 01:19:21.241203 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.85s 2026-03-28 01:19:21.241207 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.79s 2026-03-28 01:19:21.241211 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.76s 2026-03-28 01:19:21.241214 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.55s 2026-03-28 01:19:21.241218 | orchestrator | 2026-03-28 01:19:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:19:24.279400 | orchestrator | 2026-03-28 01:19:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:19:27.325977 | orchestrator | 2026-03-28 01:19:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:19:30.378390 | orchestrator | 2026-03-28 01:19:30 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 
01:19:33.422159 | orchestrator | 2026-03-28 01:19:33 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:19:36.467453 | orchestrator | 2026-03-28 01:19:36 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:19:39.514793 | orchestrator | 2026-03-28 01:19:39 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:19:42.559014 | orchestrator | 2026-03-28 01:19:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:19:45.603319 | orchestrator | 2026-03-28 01:19:45 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:19:48.644365 | orchestrator | 2026-03-28 01:19:48 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:19:51.686913 | orchestrator | 2026-03-28 01:19:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:19:54.739160 | orchestrator | 2026-03-28 01:19:54 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:19:57.775001 | orchestrator | 2026-03-28 01:19:57 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:20:00.822256 | orchestrator | 2026-03-28 01:20:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:20:03.864443 | orchestrator | 2026-03-28 01:20:03 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:20:06.912505 | orchestrator | 2026-03-28 01:20:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:20:09.952716 | orchestrator | 2026-03-28 01:20:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:20:12.989011 | orchestrator | 2026-03-28 01:20:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:20:16.040419 | orchestrator | 2026-03-28 01:20:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:20:19.079152 | orchestrator | 2026-03-28 01:20:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:20:22.123194 | orchestrator | 2026-03-28 
01:20:22.457890 | orchestrator | 2026-03-28 01:20:22.464323 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Mar 28 01:20:22 UTC 2026 2026-03-28 01:20:22.466408 | orchestrator | 2026-03-28 01:20:22.845301 | orchestrator | ok: Runtime: 0:38:09.865961 2026-03-28 01:20:23.142166 | 2026-03-28 01:20:23.142317 | TASK [Bootstrap services] 2026-03-28 01:20:23.910400 | orchestrator | 2026-03-28 01:20:23.910581 | orchestrator | # BOOTSTRAP 2026-03-28 01:20:23.910600 | orchestrator | 2026-03-28 01:20:23.910612 | orchestrator | + set -e 2026-03-28 01:20:23.910623 | orchestrator | + echo 2026-03-28 01:20:23.910634 | orchestrator | + echo '# BOOTSTRAP' 2026-03-28 01:20:23.910649 | orchestrator | + echo 2026-03-28 01:20:23.910685 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-28 01:20:23.922494 | orchestrator | + set -e 2026-03-28 01:20:23.922618 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-28 01:20:29.772127 | orchestrator | 2026-03-28 01:20:29 | INFO  | It takes a moment until task f0af0f90-4f9c-447c-9858-af487d313d6c (flavor-manager) has been started and output is visible here. 
2026-03-28 01:20:38.281304 | orchestrator | 2026-03-28 01:20:33 | INFO  | Flavor SCS-1L-1 created 2026-03-28 01:20:38.281471 | orchestrator | 2026-03-28 01:20:33 | INFO  | Flavor SCS-1L-1-5 created 2026-03-28 01:20:38.281494 | orchestrator | 2026-03-28 01:20:33 | INFO  | Flavor SCS-1V-2 created 2026-03-28 01:20:38.281505 | orchestrator | 2026-03-28 01:20:34 | INFO  | Flavor SCS-1V-2-5 created 2026-03-28 01:20:38.281515 | orchestrator | 2026-03-28 01:20:34 | INFO  | Flavor SCS-1V-4 created 2026-03-28 01:20:38.281525 | orchestrator | 2026-03-28 01:20:34 | INFO  | Flavor SCS-1V-4-10 created 2026-03-28 01:20:38.281535 | orchestrator | 2026-03-28 01:20:34 | INFO  | Flavor SCS-1V-8 created 2026-03-28 01:20:38.281547 | orchestrator | 2026-03-28 01:20:34 | INFO  | Flavor SCS-1V-8-20 created 2026-03-28 01:20:38.281581 | orchestrator | 2026-03-28 01:20:35 | INFO  | Flavor SCS-2V-4 created 2026-03-28 01:20:38.281591 | orchestrator | 2026-03-28 01:20:35 | INFO  | Flavor SCS-2V-4-10 created 2026-03-28 01:20:38.281601 | orchestrator | 2026-03-28 01:20:35 | INFO  | Flavor SCS-2V-8 created 2026-03-28 01:20:38.281610 | orchestrator | 2026-03-28 01:20:35 | INFO  | Flavor SCS-2V-8-20 created 2026-03-28 01:20:38.281620 | orchestrator | 2026-03-28 01:20:35 | INFO  | Flavor SCS-2V-16 created 2026-03-28 01:20:38.281630 | orchestrator | 2026-03-28 01:20:35 | INFO  | Flavor SCS-2V-16-50 created 2026-03-28 01:20:38.281639 | orchestrator | 2026-03-28 01:20:36 | INFO  | Flavor SCS-4V-8 created 2026-03-28 01:20:38.281650 | orchestrator | 2026-03-28 01:20:36 | INFO  | Flavor SCS-4V-8-20 created 2026-03-28 01:20:38.281666 | orchestrator | 2026-03-28 01:20:36 | INFO  | Flavor SCS-4V-16 created 2026-03-28 01:20:38.281689 | orchestrator | 2026-03-28 01:20:36 | INFO  | Flavor SCS-4V-16-50 created 2026-03-28 01:20:38.281708 | orchestrator | 2026-03-28 01:20:36 | INFO  | Flavor SCS-4V-32 created 2026-03-28 01:20:38.281722 | orchestrator | 2026-03-28 01:20:36 | INFO  | Flavor SCS-4V-32-100 created 
2026-03-28 01:20:38.281738 | orchestrator | 2026-03-28 01:20:36 | INFO  | Flavor SCS-8V-16 created 2026-03-28 01:20:38.281754 | orchestrator | 2026-03-28 01:20:37 | INFO  | Flavor SCS-8V-16-50 created 2026-03-28 01:20:38.281770 | orchestrator | 2026-03-28 01:20:37 | INFO  | Flavor SCS-8V-32 created 2026-03-28 01:20:38.281784 | orchestrator | 2026-03-28 01:20:37 | INFO  | Flavor SCS-8V-32-100 created 2026-03-28 01:20:38.281797 | orchestrator | 2026-03-28 01:20:37 | INFO  | Flavor SCS-16V-32 created 2026-03-28 01:20:38.281812 | orchestrator | 2026-03-28 01:20:37 | INFO  | Flavor SCS-16V-32-100 created 2026-03-28 01:20:38.281870 | orchestrator | 2026-03-28 01:20:37 | INFO  | Flavor SCS-2V-4-20s created 2026-03-28 01:20:38.281888 | orchestrator | 2026-03-28 01:20:37 | INFO  | Flavor SCS-4V-8-50s created 2026-03-28 01:20:38.281904 | orchestrator | 2026-03-28 01:20:38 | INFO  | Flavor SCS-8V-32-100s created 2026-03-28 01:20:40.775445 | orchestrator | 2026-03-28 01:20:40 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-28 01:20:40.860479 | orchestrator | 2026-03-28 01:20:40 | INFO  | Task 7480240e-1b3a-460e-9882-81978771ee8e (bootstrap-basic) was prepared for execution. 2026-03-28 01:20:40.860572 | orchestrator | 2026-03-28 01:20:40 | INFO  | It takes a moment until task 7480240e-1b3a-460e-9882-81978771ee8e (bootstrap-basic) has been started and output is visible here. 
2026-03-28 01:21:29.230334 | orchestrator | 2026-03-28 01:21:29.230499 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-28 01:21:29.230518 | orchestrator | 2026-03-28 01:21:29.230560 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 01:21:29.230573 | orchestrator | Saturday 28 March 2026 01:20:45 +0000 (0:00:00.083) 0:00:00.083 ******** 2026-03-28 01:21:29.230582 | orchestrator | ok: [localhost] 2026-03-28 01:21:29.230592 | orchestrator | 2026-03-28 01:21:29.230602 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-28 01:21:29.230611 | orchestrator | Saturday 28 March 2026 01:20:47 +0000 (0:00:01.918) 0:00:02.002 ******** 2026-03-28 01:21:29.230620 | orchestrator | ok: [localhost] 2026-03-28 01:21:29.230628 | orchestrator | 2026-03-28 01:21:29.230638 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-28 01:21:29.230646 | orchestrator | Saturday 28 March 2026 01:20:56 +0000 (0:00:09.554) 0:00:11.557 ******** 2026-03-28 01:21:29.230655 | orchestrator | changed: [localhost] 2026-03-28 01:21:29.230665 | orchestrator | 2026-03-28 01:21:29.230673 | orchestrator | TASK [Create public network] *************************************************** 2026-03-28 01:21:29.230682 | orchestrator | Saturday 28 March 2026 01:21:04 +0000 (0:00:07.749) 0:00:19.306 ******** 2026-03-28 01:21:29.230691 | orchestrator | changed: [localhost] 2026-03-28 01:21:29.230700 | orchestrator | 2026-03-28 01:21:29.230725 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-28 01:21:29.230741 | orchestrator | Saturday 28 March 2026 01:21:09 +0000 (0:00:05.290) 0:00:24.597 ******** 2026-03-28 01:21:29.230783 | orchestrator | changed: [localhost] 2026-03-28 01:21:29.230826 | orchestrator | 2026-03-28 01:21:29.230841 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-03-28 01:21:29.230855 | orchestrator | Saturday 28 March 2026 01:21:16 +0000 (0:00:06.609) 0:00:31.206 ******** 2026-03-28 01:21:29.230869 | orchestrator | changed: [localhost] 2026-03-28 01:21:29.230883 | orchestrator | 2026-03-28 01:21:29.230897 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-28 01:21:29.230910 | orchestrator | Saturday 28 March 2026 01:21:21 +0000 (0:00:04.632) 0:00:35.839 ******** 2026-03-28 01:21:29.230922 | orchestrator | changed: [localhost] 2026-03-28 01:21:29.230935 | orchestrator | 2026-03-28 01:21:29.230950 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-28 01:21:29.230982 | orchestrator | Saturday 28 March 2026 01:21:25 +0000 (0:00:03.980) 0:00:39.819 ******** 2026-03-28 01:21:29.230998 | orchestrator | ok: [localhost] 2026-03-28 01:21:29.231013 | orchestrator | 2026-03-28 01:21:29.231027 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:21:29.231044 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:21:29.231061 | orchestrator | 2026-03-28 01:21:29.231076 | orchestrator | 2026-03-28 01:21:29.231091 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:21:29.231105 | orchestrator | Saturday 28 March 2026 01:21:28 +0000 (0:00:03.736) 0:00:43.555 ******** 2026-03-28 01:21:29.231116 | orchestrator | =============================================================================== 2026-03-28 01:21:29.231130 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.55s 2026-03-28 01:21:29.231145 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.75s 2026-03-28 01:21:29.231160 | 
orchestrator | Set public network to default ------------------------------------------- 6.61s 2026-03-28 01:21:29.231174 | orchestrator | Create public network --------------------------------------------------- 5.29s 2026-03-28 01:21:29.231241 | orchestrator | Create public subnet ---------------------------------------------------- 4.63s 2026-03-28 01:21:29.231256 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.98s 2026-03-28 01:21:29.231271 | orchestrator | Create manager role ----------------------------------------------------- 3.74s 2026-03-28 01:21:29.231285 | orchestrator | Gathering Facts --------------------------------------------------------- 1.92s 2026-03-28 01:21:31.954545 | orchestrator | 2026-03-28 01:21:31 | INFO  | It takes a moment until task 87a07d7d-56a7-480d-b5c5-2fd525936542 (image-manager) has been started and output is visible here. 2026-03-28 01:22:15.582943 | orchestrator | 2026-03-28 01:21:34 | INFO  | Processing image 'Cirros 0.6.2' 2026-03-28 01:22:15.583054 | orchestrator | 2026-03-28 01:21:35 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-03-28 01:22:15.583071 | orchestrator | 2026-03-28 01:21:35 | INFO  | Importing image Cirros 0.6.2 2026-03-28 01:22:15.583077 | orchestrator | 2026-03-28 01:21:35 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-28 01:22:15.583083 | orchestrator | 2026-03-28 01:21:37 | INFO  | Waiting for image to leave queued state... 2026-03-28 01:22:15.583089 | orchestrator | 2026-03-28 01:21:41 | INFO  | Waiting for import to complete... 
2026-03-28 01:22:15.583094 | orchestrator | 2026-03-28 01:21:51 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-03-28 01:22:15.583099 | orchestrator | 2026-03-28 01:21:51 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-03-28 01:22:15.583104 | orchestrator | 2026-03-28 01:21:51 | INFO  | Setting internal_version = 0.6.2 2026-03-28 01:22:15.583109 | orchestrator | 2026-03-28 01:21:51 | INFO  | Setting image_original_user = cirros 2026-03-28 01:22:15.583114 | orchestrator | 2026-03-28 01:21:51 | INFO  | Adding tag os:cirros 2026-03-28 01:22:15.583118 | orchestrator | 2026-03-28 01:21:51 | INFO  | Setting property architecture: x86_64 2026-03-28 01:22:15.583123 | orchestrator | 2026-03-28 01:21:52 | INFO  | Setting property hw_disk_bus: scsi 2026-03-28 01:22:15.583127 | orchestrator | 2026-03-28 01:21:52 | INFO  | Setting property hw_rng_model: virtio 2026-03-28 01:22:15.583132 | orchestrator | 2026-03-28 01:21:52 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-28 01:22:15.583137 | orchestrator | 2026-03-28 01:21:52 | INFO  | Setting property hw_watchdog_action: reset 2026-03-28 01:22:15.583141 | orchestrator | 2026-03-28 01:21:52 | INFO  | Setting property hypervisor_type: qemu 2026-03-28 01:22:15.583146 | orchestrator | 2026-03-28 01:21:53 | INFO  | Setting property os_distro: cirros 2026-03-28 01:22:15.583150 | orchestrator | 2026-03-28 01:21:53 | INFO  | Setting property os_purpose: minimal 2026-03-28 01:22:15.583155 | orchestrator | 2026-03-28 01:21:53 | INFO  | Setting property replace_frequency: never 2026-03-28 01:22:15.583159 | orchestrator | 2026-03-28 01:21:53 | INFO  | Setting property uuid_validity: none 2026-03-28 01:22:15.583163 | orchestrator | 2026-03-28 01:21:54 | INFO  | Setting property provided_until: none 2026-03-28 01:22:15.583168 | orchestrator | 2026-03-28 01:21:54 | INFO  | Setting property image_description: Cirros 2026-03-28 01:22:15.583172 | orchestrator | 2026-03-28 01:21:54 | INFO  | 
Setting property image_name: Cirros 2026-03-28 01:22:15.583176 | orchestrator | 2026-03-28 01:21:54 | INFO  | Setting property internal_version: 0.6.2 2026-03-28 01:22:15.583181 | orchestrator | 2026-03-28 01:21:55 | INFO  | Setting property image_original_user: cirros 2026-03-28 01:22:15.583201 | orchestrator | 2026-03-28 01:21:55 | INFO  | Setting property os_version: 0.6.2 2026-03-28 01:22:15.583216 | orchestrator | 2026-03-28 01:21:55 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-28 01:22:15.583222 | orchestrator | 2026-03-28 01:21:55 | INFO  | Setting property image_build_date: 2023-05-30 2026-03-28 01:22:15.583226 | orchestrator | 2026-03-28 01:21:56 | INFO  | Checking status of 'Cirros 0.6.2' 2026-03-28 01:22:15.583231 | orchestrator | 2026-03-28 01:21:56 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-03-28 01:22:15.583235 | orchestrator | 2026-03-28 01:21:56 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-03-28 01:22:15.583239 | orchestrator | 2026-03-28 01:21:56 | INFO  | Processing image 'Cirros 0.6.3' 2026-03-28 01:22:15.583247 | orchestrator | 2026-03-28 01:21:56 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-03-28 01:22:15.583251 | orchestrator | 2026-03-28 01:21:56 | INFO  | Importing image Cirros 0.6.3 2026-03-28 01:22:15.583255 | orchestrator | 2026-03-28 01:21:56 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-28 01:22:15.583260 | orchestrator | 2026-03-28 01:21:58 | INFO  | Waiting for import to complete... 
2026-03-28 01:22:15.583264 | orchestrator | 2026-03-28 01:22:09 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-03-28 01:22:15.583283 | orchestrator | 2026-03-28 01:22:09 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-03-28 01:22:15.583288 | orchestrator | 2026-03-28 01:22:09 | INFO  | Setting internal_version = 0.6.3 2026-03-28 01:22:15.583292 | orchestrator | 2026-03-28 01:22:09 | INFO  | Setting image_original_user = cirros 2026-03-28 01:22:15.583296 | orchestrator | 2026-03-28 01:22:09 | INFO  | Adding tag os:cirros 2026-03-28 01:22:15.583301 | orchestrator | 2026-03-28 01:22:09 | INFO  | Setting property architecture: x86_64 2026-03-28 01:22:15.583305 | orchestrator | 2026-03-28 01:22:10 | INFO  | Setting property hw_disk_bus: scsi 2026-03-28 01:22:15.583309 | orchestrator | 2026-03-28 01:22:10 | INFO  | Setting property hw_rng_model: virtio 2026-03-28 01:22:15.583314 | orchestrator | 2026-03-28 01:22:10 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-28 01:22:15.583318 | orchestrator | 2026-03-28 01:22:11 | INFO  | Setting property hw_watchdog_action: reset 2026-03-28 01:22:15.583322 | orchestrator | 2026-03-28 01:22:11 | INFO  | Setting property hypervisor_type: qemu 2026-03-28 01:22:15.583327 | orchestrator | 2026-03-28 01:22:11 | INFO  | Setting property os_distro: cirros 2026-03-28 01:22:15.583331 | orchestrator | 2026-03-28 01:22:11 | INFO  | Setting property os_purpose: minimal 2026-03-28 01:22:15.583338 | orchestrator | 2026-03-28 01:22:12 | INFO  | Setting property replace_frequency: never 2026-03-28 01:22:15.583345 | orchestrator | 2026-03-28 01:22:12 | INFO  | Setting property uuid_validity: none 2026-03-28 01:22:15.583352 | orchestrator | 2026-03-28 01:22:12 | INFO  | Setting property provided_until: none 2026-03-28 01:22:15.583359 | orchestrator | 2026-03-28 01:22:12 | INFO  | Setting property image_description: Cirros 2026-03-28 01:22:15.583366 | orchestrator | 2026-03-28 01:22:13 | INFO  | 
Setting property image_name: Cirros 2026-03-28 01:22:15.583373 | orchestrator | 2026-03-28 01:22:13 | INFO  | Setting property internal_version: 0.6.3 2026-03-28 01:22:15.583379 | orchestrator | 2026-03-28 01:22:13 | INFO  | Setting property image_original_user: cirros 2026-03-28 01:22:15.583393 | orchestrator | 2026-03-28 01:22:13 | INFO  | Setting property os_version: 0.6.3 2026-03-28 01:22:15.583400 | orchestrator | 2026-03-28 01:22:14 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-28 01:22:15.583407 | orchestrator | 2026-03-28 01:22:14 | INFO  | Setting property image_build_date: 2024-09-26 2026-03-28 01:22:15.583414 | orchestrator | 2026-03-28 01:22:14 | INFO  | Checking status of 'Cirros 0.6.3' 2026-03-28 01:22:15.583420 | orchestrator | 2026-03-28 01:22:14 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-03-28 01:22:15.583427 | orchestrator | 2026-03-28 01:22:14 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-03-28 01:22:16.006585 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-03-28 01:22:18.404578 | orchestrator | 2026-03-28 01:22:18 | INFO  | date: 2026-03-27 2026-03-28 01:22:18.404646 | orchestrator | 2026-03-28 01:22:18 | INFO  | image: octavia-amphora-haproxy-2024.2.20260327.qcow2 2026-03-28 01:22:18.407314 | orchestrator | 2026-03-28 01:22:18 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260327.qcow2 2026-03-28 01:22:18.407336 | orchestrator | 2026-03-28 01:22:18 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260327.qcow2.CHECKSUM 2026-03-28 01:22:18.530163 | orchestrator | 2026-03-28 01:22:18 | INFO  | checksum: 0ed5f2f3e98ff1ae58214ab379bdaeed446d1947343245e229797cec0b1222d6 2026-03-28 01:22:18.632027 | orchestrator | 
2026-03-28 01:22:18 | INFO  | It takes a moment until task 3617bfbc-43c4-40a3-8212-b6e02e1a1c5b (image-manager) has been started and output is visible here. 2026-03-28 01:23:31.514385 | orchestrator | 2026-03-28 01:22:20 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-03-27' 2026-03-28 01:23:31.514502 | orchestrator | 2026-03-28 01:22:21 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260327.qcow2: 200 2026-03-28 01:23:31.514520 | orchestrator | 2026-03-28 01:22:21 | INFO  | Importing image OpenStack Octavia Amphora 2026-03-27 2026-03-28 01:23:31.514529 | orchestrator | 2026-03-28 01:22:21 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260327.qcow2 2026-03-28 01:23:31.514539 | orchestrator | 2026-03-28 01:22:22 | INFO  | Waiting for image to leave queued state... 2026-03-28 01:23:31.514549 | orchestrator | 2026-03-28 01:22:24 | INFO  | Waiting for import to complete... 2026-03-28 01:23:31.514558 | orchestrator | 2026-03-28 01:22:34 | INFO  | Waiting for import to complete... 2026-03-28 01:23:31.514567 | orchestrator | 2026-03-28 01:22:45 | INFO  | Waiting for import to complete... 2026-03-28 01:23:31.514576 | orchestrator | 2026-03-28 01:22:55 | INFO  | Waiting for import to complete... 2026-03-28 01:23:31.514587 | orchestrator | 2026-03-28 01:23:05 | INFO  | Waiting for import to complete... 2026-03-28 01:23:31.514593 | orchestrator | 2026-03-28 01:23:15 | INFO  | Waiting for import to complete... 
2026-03-28 01:23:31.514599 | orchestrator | 2026-03-28 01:23:25 | INFO  | Import of 'OpenStack Octavia Amphora 2026-03-27' successfully completed, reloading images 2026-03-28 01:23:31.514606 | orchestrator | 2026-03-28 01:23:26 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-03-27' 2026-03-28 01:23:31.514611 | orchestrator | 2026-03-28 01:23:26 | INFO  | Setting internal_version = 2026-03-27 2026-03-28 01:23:31.514638 | orchestrator | 2026-03-28 01:23:26 | INFO  | Setting image_original_user = ubuntu 2026-03-28 01:23:31.514644 | orchestrator | 2026-03-28 01:23:26 | INFO  | Adding tag amphora 2026-03-28 01:23:31.514649 | orchestrator | 2026-03-28 01:23:26 | INFO  | Adding tag os:ubuntu 2026-03-28 01:23:31.514655 | orchestrator | 2026-03-28 01:23:26 | INFO  | Setting property architecture: x86_64 2026-03-28 01:23:31.514660 | orchestrator | 2026-03-28 01:23:26 | INFO  | Setting property hw_disk_bus: scsi 2026-03-28 01:23:31.514665 | orchestrator | 2026-03-28 01:23:26 | INFO  | Setting property hw_rng_model: virtio 2026-03-28 01:23:31.514670 | orchestrator | 2026-03-28 01:23:27 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-28 01:23:31.514675 | orchestrator | 2026-03-28 01:23:27 | INFO  | Setting property hw_watchdog_action: reset 2026-03-28 01:23:31.514680 | orchestrator | 2026-03-28 01:23:27 | INFO  | Setting property hypervisor_type: qemu 2026-03-28 01:23:31.514685 | orchestrator | 2026-03-28 01:23:27 | INFO  | Setting property os_distro: ubuntu 2026-03-28 01:23:31.514690 | orchestrator | 2026-03-28 01:23:28 | INFO  | Setting property replace_frequency: quarterly 2026-03-28 01:23:31.514695 | orchestrator | 2026-03-28 01:23:28 | INFO  | Setting property uuid_validity: last-1 2026-03-28 01:23:31.514700 | orchestrator | 2026-03-28 01:23:28 | INFO  | Setting property provided_until: none 2026-03-28 01:23:31.514705 | orchestrator | 2026-03-28 01:23:28 | INFO  | Setting property os_purpose: network 2026-03-28 01:23:31.514721 | orchestrator 
| 2026-03-28 01:23:29 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-03-28 01:23:31.514727 | orchestrator | 2026-03-28 01:23:29 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-03-28 01:23:31.514732 | orchestrator | 2026-03-28 01:23:29 | INFO  | Setting property internal_version: 2026-03-27 2026-03-28 01:23:31.514794 | orchestrator | 2026-03-28 01:23:29 | INFO  | Setting property image_original_user: ubuntu 2026-03-28 01:23:31.514800 | orchestrator | 2026-03-28 01:23:30 | INFO  | Setting property os_version: 2026-03-27 2026-03-28 01:23:31.514805 | orchestrator | 2026-03-28 01:23:30 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260327.qcow2 2026-03-28 01:23:31.514810 | orchestrator | 2026-03-28 01:23:30 | INFO  | Setting property image_build_date: 2026-03-27 2026-03-28 01:23:31.514815 | orchestrator | 2026-03-28 01:23:31 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-03-27' 2026-03-28 01:23:31.514820 | orchestrator | 2026-03-28 01:23:31 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-03-27' 2026-03-28 01:23:31.514841 | orchestrator | 2026-03-28 01:23:31 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-03-28 01:23:31.514846 | orchestrator | 2026-03-28 01:23:31 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-03-28 01:23:31.514853 | orchestrator | 2026-03-28 01:23:31 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-03-28 01:23:31.514858 | orchestrator | 2026-03-28 01:23:31 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-03-28 01:23:32.305031 | orchestrator | ok: Runtime: 0:03:08.356996 2026-03-28 01:23:32.329580 | 2026-03-28 01:23:32.329772 | TASK [Run checks] 2026-03-28 01:23:33.010605 | orchestrator | + set -e 2026-03-28 01:23:33.010896 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-03-28 01:23:33.010930 | orchestrator | ++ export INTERACTIVE=false 2026-03-28 01:23:33.010952 | orchestrator | ++ INTERACTIVE=false 2026-03-28 01:23:33.010967 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 01:23:33.010980 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 01:23:33.010996 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-28 01:23:33.011059 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-28 01:23:33.018096 | orchestrator | 2026-03-28 01:23:33.018233 | orchestrator | # CHECK 2026-03-28 01:23:33.018258 | orchestrator | 2026-03-28 01:23:33.018277 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-28 01:23:33.018301 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-28 01:23:33.018319 | orchestrator | + echo 2026-03-28 01:23:33.018339 | orchestrator | + echo '# CHECK' 2026-03-28 01:23:33.018357 | orchestrator | + echo 2026-03-28 01:23:33.018380 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-28 01:23:33.018529 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-28 01:23:33.082094 | orchestrator | 2026-03-28 01:23:33.082168 | orchestrator | ## Containers @ testbed-manager 2026-03-28 01:23:33.082175 | orchestrator | 2026-03-28 01:23:33.082181 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-28 01:23:33.082185 | orchestrator | + echo 2026-03-28 01:23:33.082190 | orchestrator | + echo '## Containers @ testbed-manager' 2026-03-28 01:23:33.082194 | orchestrator | + echo 2026-03-28 01:23:33.082199 | orchestrator | + osism container testbed-manager ps 2026-03-28 01:23:35.215794 | orchestrator | 2026-03-28 01:23:35 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-03-28 01:23:35.609556 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-28 01:23:35.609712 | orchestrator | e32feb3344ea 
registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_blackbox_exporter 2026-03-28 01:23:35.609795 | orchestrator | 4d40dceeda75 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_alertmanager 2026-03-28 01:23:35.609814 | orchestrator | c6946175e012 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2026-03-28 01:23:35.609838 | orchestrator | f315e55d80cd registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2026-03-28 01:23:35.609855 | orchestrator | 548aec1f9825 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_server 2026-03-28 01:23:35.609879 | orchestrator | 813543895a0c registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 19 minutes ago Up 19 minutes cephclient 2026-03-28 01:23:35.609898 | orchestrator | 8d7330fda071 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes cron 2026-03-28 01:23:35.609914 | orchestrator | 5578c41b1ff8 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes kolla_toolbox 2026-03-28 01:23:35.609965 | orchestrator | bbcd282265f3 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 34 minutes ago Up 33 minutes fluentd 2026-03-28 01:23:35.609983 | orchestrator | 2cd38a39b309 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 34 minutes ago Up 34 minutes (healthy) 80/tcp phpmyadmin 2026-03-28 01:23:35.609999 | orchestrator | 2a3650af4a07 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 35 minutes ago Up 34 minutes openstackclient 
2026-03-28 01:23:35.610069 | orchestrator | f8b183a47807 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 35 minutes ago Up 35 minutes (healthy) 8080/tcp homer 2026-03-28 01:23:35.610094 | orchestrator | 48b820c6a396 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 59 minutes ago Up 58 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2026-03-28 01:23:35.610119 | orchestrator | 2cff791f585d registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" About an hour ago Up 41 minutes (healthy) manager-inventory_reconciler-1 2026-03-28 01:23:35.610163 | orchestrator | 03590dc375f5 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" About an hour ago Up 42 minutes (healthy) osism-ansible 2026-03-28 01:23:35.610182 | orchestrator | 2565e4eb5bf7 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" About an hour ago Up 42 minutes (healthy) ceph-ansible 2026-03-28 01:23:35.610200 | orchestrator | 2e1480dd066a registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" About an hour ago Up 42 minutes (healthy) kolla-ansible 2026-03-28 01:23:35.610219 | orchestrator | 5ea50f777e6e registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" About an hour ago Up 42 minutes (healthy) osism-kubernetes 2026-03-28 01:23:35.610238 | orchestrator | 2ef31be801ea registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" About an hour ago Up 42 minutes (healthy) 8000/tcp manager-ara-server-1 2026-03-28 01:23:35.610258 | orchestrator | fb8aafdf9a63 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" About an hour ago Up 42 minutes (healthy) osismclient 2026-03-28 01:23:35.610276 | orchestrator | 5f94ad158cc4 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-03-28 01:23:35.610293 | orchestrator | 37951de84332 
registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" About an hour ago Up 42 minutes (healthy) 3306/tcp manager-mariadb-1 2026-03-28 01:23:35.610324 | orchestrator | 45b0c059f349 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" About an hour ago Up 42 minutes (healthy) 6379/tcp manager-redis-1 2026-03-28 01:23:35.610341 | orchestrator | 101992f22aa2 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) manager-flower-1 2026-03-28 01:23:35.610359 | orchestrator | f52e520e0867 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) manager-listener-1 2026-03-28 01:23:35.610376 | orchestrator | b3cad0b538fb registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) manager-beat-1 2026-03-28 01:23:35.610402 | orchestrator | 1a8d4f6b6035 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) manager-openstack-1 2026-03-28 01:23:35.610421 | orchestrator | 393416c7e32f registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" About an hour ago Up 42 minutes 192.168.16.5:3000->3000/tcp osism-frontend 2026-03-28 01:23:35.610438 | orchestrator | e6fcf5562b17 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-03-28 01:23:35.933307 | orchestrator | 2026-03-28 01:23:35.933421 | orchestrator | ## Images @ testbed-manager 2026-03-28 01:23:35.933440 | orchestrator | 2026-03-28 01:23:35.933452 | orchestrator | + echo 2026-03-28 01:23:35.933464 | orchestrator | + echo '## Images @ testbed-manager' 2026-03-28 01:23:35.933477 | orchestrator | + echo 2026-03-28 01:23:35.933488 | orchestrator | + osism container testbed-manager images 
2026-03-28 01:23:38.360588 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-28 01:23:38.360703 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 4f363275599b 21 hours ago 239MB 2026-03-28 01:23:38.360720 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 8 weeks ago 41.4MB 2026-03-28 01:23:38.360759 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 3 months ago 11.5MB 2026-03-28 01:23:38.360772 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 3 months ago 608MB 2026-03-28 01:23:38.360783 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-28 01:23:38.360794 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-28 01:23:38.360805 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-28 01:23:38.360816 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 3 months ago 308MB 2026-03-28 01:23:38.360826 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-28 01:23:38.360917 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 3 months ago 404MB 2026-03-28 01:23:38.360930 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 3 months ago 839MB 2026-03-28 01:23:38.360941 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-28 01:23:38.360957 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 3 months ago 330MB 2026-03-28 01:23:38.360975 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 3 months ago 613MB 
2026-03-28 01:23:38.360992 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 3 months ago 560MB 2026-03-28 01:23:38.361011 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 3 months ago 1.23GB 2026-03-28 01:23:38.361030 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 3 months ago 383MB 2026-03-28 01:23:38.361048 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 3 months ago 238MB 2026-03-28 01:23:38.361063 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB 2026-03-28 01:23:38.361074 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB 2026-03-28 01:23:38.361084 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB 2026-03-28 01:23:38.361095 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB 2026-03-28 01:23:38.361106 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 10 months ago 453MB 2026-03-28 01:23:38.361116 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 21 months ago 146MB 2026-03-28 01:23:38.717105 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-28 01:23:38.717604 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-28 01:23:38.777192 | orchestrator | 2026-03-28 01:23:38.777269 | orchestrator | ## Containers @ testbed-node-0 2026-03-28 01:23:38.777278 | orchestrator | 2026-03-28 01:23:38.777283 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-28 01:23:38.777288 | orchestrator | + echo 2026-03-28 01:23:38.777293 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-03-28 01:23:38.777299 | orchestrator | + echo 2026-03-28 01:23:38.777303 | orchestrator | + osism container testbed-node-0 ps 2026-03-28 01:23:41.404438 | orchestrator | CONTAINER ID IMAGE COMMAND 
CREATED STATUS PORTS NAMES 2026-03-28 01:23:41.404544 | orchestrator | cb50b2ee20fc registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-03-28 01:23:41.404561 | orchestrator | e3876daa4b13 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-03-28 01:23:41.404573 | orchestrator | e32e49445096 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-03-28 01:23:41.404584 | orchestrator | 103adfe92067 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-03-28 01:23:41.404595 | orchestrator | 78219d8bd42b registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-03-28 01:23:41.404649 | orchestrator | 779472d6f4e9 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2026-03-28 01:23:41.404662 | orchestrator | a41961c5365a registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2026-03-28 01:23:41.404673 | orchestrator | e382109f9b33 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api 2026-03-28 01:23:41.404684 | orchestrator | 6341c44c80e6 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-03-28 01:23:41.404695 | orchestrator | 62a01d7fb49c registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes grafana 2026-03-28 
01:23:41.404705 | orchestrator | 43ad764ab86a registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup 2026-03-28 01:23:41.404716 | orchestrator | 925c7b746901 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) glance_api 2026-03-28 01:23:41.404727 | orchestrator | cb087d0bde96 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume 2026-03-28 01:23:41.404776 | orchestrator | ddbef96cc5f9 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_scheduler 2026-03-28 01:23:41.404788 | orchestrator | 589d11c1a3b4 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2026-03-28 01:23:41.404829 | orchestrator | d4d375d0a21e registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_api 2026-03-28 01:23:41.404840 | orchestrator | 34f680eafaf1 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2026-03-28 01:23:41.404851 | orchestrator | 7a2a642a34dc registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2026-03-28 01:23:41.404862 | orchestrator | 8c5223bd0086 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2026-03-28 01:23:41.404891 | orchestrator | d477c6630dee registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 13 minutes ago Up 13 
minutes prometheus_node_exporter 2026-03-28 01:23:41.404903 | orchestrator | 897751a19efd registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) magnum_conductor 2026-03-28 01:23:41.404914 | orchestrator | 05312b010354 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server 2026-03-28 01:23:41.404924 | orchestrator | f12b8a60171a registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) magnum_api 2026-03-28 01:23:41.404944 | orchestrator | bd6fe1ce73d7 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2026-03-28 01:23:41.404962 | orchestrator | 09f432a244da registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker 2026-03-28 01:23:41.404974 | orchestrator | 26dde9ae41c0 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns 2026-03-28 01:23:41.404984 | orchestrator | ce4072fc7e56 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_producer 2026-03-28 01:23:41.405001 | orchestrator | 234c22e30da3 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_central 2026-03-28 01:23:41.405012 | orchestrator | 7971e441677a registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_api 2026-03-28 01:23:41.405022 | orchestrator | 1ef9af9c167a registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 17 minutes 
ago Up 17 minutes (healthy) designate_backend_bind9 2026-03-28 01:23:41.405033 | orchestrator | c88b6ddc8ffb registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2026-03-28 01:23:41.405044 | orchestrator | 3ea52f742da5 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener 2026-03-28 01:23:41.405054 | orchestrator | 6d4ef06dc879 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api 2026-03-28 01:23:41.405065 | orchestrator | 4e54e52a29c6 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 18 minutes ago Up 18 minutes ceph-mgr-testbed-node-0 2026-03-28 01:23:41.405088 | orchestrator | 0e7ab476ebd1 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2026-03-28 01:23:41.405105 | orchestrator | 7f81d6c889cb registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2026-03-28 01:23:41.405116 | orchestrator | cc728cb88417 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_ssh 2026-03-28 01:23:41.405127 | orchestrator | ec1dd46d3637 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) horizon 2026-03-28 01:23:41.405138 | orchestrator | 00ae7a140862 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2026-03-28 01:23:41.405149 | orchestrator | 19ed1f80fd0d registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 24 minutes ago Up 24 minutes 
(healthy) opensearch_dashboards 2026-03-28 01:23:41.405167 | orchestrator | 0face0c94ad1 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) opensearch 2026-03-28 01:23:41.405179 | orchestrator | 488d82882086 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-0 2026-03-28 01:23:41.405196 | orchestrator | 723615782b40 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes keepalived 2026-03-28 01:23:41.405207 | orchestrator | 7c5b90dcf7c1 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) proxysql 2026-03-28 01:23:41.405218 | orchestrator | 8204ed8efd46 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) haproxy 2026-03-28 01:23:41.405229 | orchestrator | 7b77d7d38e3c registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_northd 2026-03-28 01:23:41.405239 | orchestrator | ea4275bba5b4 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_sb_db 2026-03-28 01:23:41.405250 | orchestrator | df681982c78e registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_nb_db 2026-03-28 01:23:41.405261 | orchestrator | 45c0ea2460d3 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 30 minutes ago Up 30 minutes ceph-mon-testbed-node-0 2026-03-28 01:23:41.405272 | orchestrator | 47728e2591af registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_controller 2026-03-28 01:23:41.405282 | orchestrator | 2ebbdda848c7 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init 
--single-…" 31 minutes ago Up 31 minutes (healthy) rabbitmq 2026-03-28 01:23:41.405293 | orchestrator | 86739714b2f1 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" 32 minutes ago Up 31 minutes (healthy) openvswitch_vswitchd 2026-03-28 01:23:41.405304 | orchestrator | 3bdaf7717f36 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) openvswitch_db 2026-03-28 01:23:41.405314 | orchestrator | f5e9caa9bedb registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis_sentinel 2026-03-28 01:23:41.405325 | orchestrator | 43c4ed3b65ae registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis 2026-03-28 01:23:41.405335 | orchestrator | 974e1e8f9305 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) memcached 2026-03-28 01:23:41.405346 | orchestrator | 377e38d17833 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes cron 2026-03-28 01:23:41.405357 | orchestrator | 3fe61a9b852b registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes kolla_toolbox 2026-03-28 01:23:41.405367 | orchestrator | d12b578b119c registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes fluentd 2026-03-28 01:23:41.752105 | orchestrator | 2026-03-28 01:23:41.752216 | orchestrator | ## Images @ testbed-node-0 2026-03-28 01:23:41.752234 | orchestrator | 2026-03-28 01:23:41.752247 | orchestrator | + echo 2026-03-28 01:23:41.752287 | orchestrator | + echo '## Images @ testbed-node-0' 2026-03-28 01:23:41.752300 | orchestrator | + echo 2026-03-28 01:23:41.752311 | orchestrator | + osism container testbed-node-0 images 
2026-03-28 01:23:44.247472 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-28 01:23:44.247593 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB 2026-03-28 01:23:44.247606 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-28 01:23:44.247615 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-28 01:23:44.247635 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-28 01:23:44.247656 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-28 01:23:44.247664 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-28 01:23:44.247672 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-28 01:23:44.247680 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-28 01:23:44.247689 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-28 01:23:44.247697 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-28 01:23:44.247705 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-28 01:23:44.247713 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-28 01:23:44.247721 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB 2026-03-28 01:23:44.247838 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 
2026-03-28 01:23:44.247850 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-28 01:23:44.247858 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB 2026-03-28 01:23:44.247866 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB 2026-03-28 01:23:44.247874 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-28 01:23:44.247882 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB 2026-03-28 01:23:44.247890 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-28 01:23:44.247898 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB 2026-03-28 01:23:44.247906 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB 2026-03-28 01:23:44.247914 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB 2026-03-28 01:23:44.247922 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB 2026-03-28 01:23:44.247930 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB 2026-03-28 01:23:44.247956 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB 2026-03-28 01:23:44.247964 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB 2026-03-28 01:23:44.247972 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 
23.0.2.20251130 37cb6975d4a5 3 months ago 976MB 2026-03-28 01:23:44.247980 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB 2026-03-28 01:23:44.247995 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB 2026-03-28 01:23:44.248004 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB 2026-03-28 01:23:44.248030 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB 2026-03-28 01:23:44.248040 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB 2026-03-28 01:23:44.248049 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB 2026-03-28 01:23:44.248064 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB 2026-03-28 01:23:44.248084 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB 2026-03-28 01:23:44.248103 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB 2026-03-28 01:23:44.248117 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB 2026-03-28 01:23:44.248131 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-28 01:23:44.248144 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-28 01:23:44.248158 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-28 01:23:44.248171 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 
15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-28 01:23:44.248185 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-28 01:23:44.248200 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-28 01:23:44.248213 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-28 01:23:44.248228 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-28 01:23:44.248243 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-28 01:23:44.248258 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-28 01:23:44.248270 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-28 01:23:44.248278 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-28 01:23:44.248286 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-28 01:23:44.248303 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-28 01:23:44.248311 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-28 01:23:44.248319 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-28 01:23:44.248326 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB 2026-03-28 01:23:44.248334 | orchestrator | 
registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB 2026-03-28 01:23:44.248342 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-28 01:23:44.248355 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-28 01:23:44.248363 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-28 01:23:44.248371 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-28 01:23:44.248379 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-28 01:23:44.248387 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-28 01:23:44.248395 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-28 01:23:44.248410 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-28 01:23:44.248419 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-28 01:23:44.614114 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-28 01:23:44.614506 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-28 01:23:44.668397 | orchestrator | 2026-03-28 01:23:44.668554 | orchestrator | ## Containers @ testbed-node-1 2026-03-28 01:23:44.668577 | orchestrator | 2026-03-28 01:23:44.668590 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-28 01:23:44.669497 | orchestrator | + echo 2026-03-28 01:23:44.669514 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-03-28 01:23:44.669525 | orchestrator | + echo 2026-03-28 01:23:44.669533 | orchestrator | + osism 
container testbed-node-1 ps 2026-03-28 01:23:47.259123 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-28 01:23:47.259194 | orchestrator | 78d88d27bbe1 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-03-28 01:23:47.259201 | orchestrator | f085bad42f6b registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-03-28 01:23:47.259206 | orchestrator | a8fb53e76233 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-03-28 01:23:47.259210 | orchestrator | bc7fe7c0c772 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-03-28 01:23:47.259214 | orchestrator | 80c5a3c32dd5 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-03-28 01:23:47.259233 | orchestrator | 35cc9763555e registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2026-03-28 01:23:47.259238 | orchestrator | f654c5966d91 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2026-03-28 01:23:47.259242 | orchestrator | 5fc4178825f9 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana 2026-03-28 01:23:47.259246 | orchestrator | 009ac881c02c registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api 2026-03-28 01:23:47.259250 | orchestrator | 63a89bc1d0a0 
registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-03-28 01:23:47.259254 | orchestrator | 814c8dcd5bf3 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) glance_api 2026-03-28 01:23:47.259261 | orchestrator | 61e444158a6b registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup 2026-03-28 01:23:47.259265 | orchestrator | d450308c6dd7 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume 2026-03-28 01:23:47.259283 | orchestrator | 762ad3948da6 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_scheduler 2026-03-28 01:23:47.259287 | orchestrator | 54bcf154ec84 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_api 2026-03-28 01:23:47.259290 | orchestrator | e0fa4d36dd52 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2026-03-28 01:23:47.259319 | orchestrator | 2135c238099d registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2026-03-28 01:23:47.259323 | orchestrator | 48b9d7e4d0be registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2026-03-28 01:23:47.259327 | orchestrator | 58ee8836e3c0 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2026-03-28 
01:23:47.259343 | orchestrator | 137ccc5840b0 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2026-03-28 01:23:47.259347 | orchestrator | dbc1b37e9377 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server 2026-03-28 01:23:47.259351 | orchestrator | 332938ad29b6 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) magnum_conductor 2026-03-28 01:23:47.259355 | orchestrator | b30da13566de registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) magnum_api 2026-03-28 01:23:47.259369 | orchestrator | 37fd7b887bc5 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2026-03-28 01:23:47.259374 | orchestrator | a2b5a9dc94dd registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker 2026-03-28 01:23:47.259378 | orchestrator | 891028ad5c4c registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns 2026-03-28 01:23:47.259381 | orchestrator | 2ad110c8cd72 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_producer 2026-03-28 01:23:47.259386 | orchestrator | bd76f21e04e1 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_central 2026-03-28 01:23:47.259392 | orchestrator | 036db84d8cd2 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_api 
2026-03-28 01:23:47.259398 | orchestrator | f59042370f86 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_backend_bind9 2026-03-28 01:23:47.259404 | orchestrator | 0b0ec17f64aa registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2026-03-28 01:23:47.259410 | orchestrator | 91fdf4adc35c registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener 2026-03-28 01:23:47.259416 | orchestrator | 7a1ef9ce1c2b registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api 2026-03-28 01:23:47.259421 | orchestrator | 54cc65ce4adf registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 18 minutes ago Up 18 minutes ceph-mgr-testbed-node-1 2026-03-28 01:23:47.259428 | orchestrator | 0cf5b3b1803d registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2026-03-28 01:23:47.259434 | orchestrator | 7d8f316ded7f registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon 2026-03-28 01:23:47.259445 | orchestrator | 951b4dee033a registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2026-03-28 01:23:47.259451 | orchestrator | 5d6d915200fe registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2026-03-28 01:23:47.259457 | orchestrator | 7c3564c2e059 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch_dashboards 
2026-03-28 01:23:47.259462 | orchestrator | f8d739ad8afe registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" 24 minutes ago Up 24 minutes (healthy) mariadb 2026-03-28 01:23:47.259473 | orchestrator | f3668b5ef810 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch 2026-03-28 01:23:47.259484 | orchestrator | f4b87d2840b7 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-1 2026-03-28 01:23:47.259490 | orchestrator | bb143e87597b registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes keepalived 2026-03-28 01:23:47.259495 | orchestrator | 7f3617e843cb registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) proxysql 2026-03-28 01:23:47.259502 | orchestrator | 5dfae4f3479b registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) haproxy 2026-03-28 01:23:47.259507 | orchestrator | 7566773775b6 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_northd 2026-03-28 01:23:47.259514 | orchestrator | 4e126ad38736 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_sb_db 2026-03-28 01:23:47.259519 | orchestrator | 390e01dd5fc0 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_nb_db 2026-03-28 01:23:47.259525 | orchestrator | 89235080f613 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_controller 2026-03-28 01:23:47.259531 | orchestrator | 927fe19ef5a9 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" 30 
minutes ago Up 30 minutes (healthy) rabbitmq 2026-03-28 01:23:47.259537 | orchestrator | 8b9de3ad6fbb registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 30 minutes ago Up 30 minutes ceph-mon-testbed-node-1 2026-03-28 01:23:47.259543 | orchestrator | 38d9a37efa55 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" 32 minutes ago Up 31 minutes (healthy) openvswitch_vswitchd 2026-03-28 01:23:47.259549 | orchestrator | a05fc7683423 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) openvswitch_db 2026-03-28 01:23:47.259555 | orchestrator | ae8a48fd43b9 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis_sentinel 2026-03-28 01:23:47.259562 | orchestrator | 07ba441e030d registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis 2026-03-28 01:23:47.259568 | orchestrator | 6efaaf467e5a registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) memcached 2026-03-28 01:23:47.259575 | orchestrator | e6beefc7e6fb registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes cron 2026-03-28 01:23:47.259582 | orchestrator | 40fd93bfc1b1 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes kolla_toolbox 2026-03-28 01:23:47.259588 | orchestrator | 2c1993fd74fa registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes fluentd 2026-03-28 01:23:47.610382 | orchestrator | 2026-03-28 01:23:47.610467 | orchestrator | ## Images @ testbed-node-1 2026-03-28 01:23:47.610502 | orchestrator | 2026-03-28 01:23:47.610513 | orchestrator | + echo 2026-03-28 01:23:47.610522 | orchestrator | + echo '## 
Images @ testbed-node-1' 2026-03-28 01:23:47.610532 | orchestrator | + echo 2026-03-28 01:23:47.610541 | orchestrator | + osism container testbed-node-1 images 2026-03-28 01:23:50.152643 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-28 01:23:50.152834 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB 2026-03-28 01:23:50.152889 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-28 01:23:50.152957 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-28 01:23:50.152978 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-28 01:23:50.153014 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-28 01:23:50.153029 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-28 01:23:50.153038 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-28 01:23:50.153047 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-28 01:23:50.153056 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-28 01:23:50.153065 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-28 01:23:50.153073 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-28 01:23:50.153086 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-28 01:23:50.153095 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 
months ago 273MB 2026-03-28 01:23:50.153104 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-28 01:23:50.153113 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-28 01:23:50.153121 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB 2026-03-28 01:23:50.153129 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB 2026-03-28 01:23:50.153138 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-28 01:23:50.153147 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB 2026-03-28 01:23:50.153155 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-28 01:23:50.153164 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB 2026-03-28 01:23:50.153172 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB 2026-03-28 01:23:50.153181 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB 2026-03-28 01:23:50.153189 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB 2026-03-28 01:23:50.153217 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB 2026-03-28 01:23:50.153226 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB 2026-03-28 01:23:50.153235 | orchestrator | 
registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB 2026-03-28 01:23:50.153243 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB 2026-03-28 01:23:50.153252 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB 2026-03-28 01:23:50.153260 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB 2026-03-28 01:23:50.153268 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB 2026-03-28 01:23:50.153294 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB 2026-03-28 01:23:50.153303 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-28 01:23:50.153313 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-28 01:23:50.153321 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-28 01:23:50.153330 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-28 01:23:50.153338 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-28 01:23:50.153347 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-28 01:23:50.153355 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-28 01:23:50.153364 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-28 01:23:50.153372 | 
orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-28 01:23:50.153381 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-28 01:23:50.153390 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-28 01:23:50.153398 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-28 01:23:50.153406 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-28 01:23:50.153415 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-28 01:23:50.153423 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-28 01:23:50.153432 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-28 01:23:50.153441 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-28 01:23:50.153466 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-28 01:23:50.153481 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-28 01:23:50.153495 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-28 01:23:50.153510 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-28 01:23:50.153525 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-28 01:23:50.153540 | 
orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-28 01:23:50.153553 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-28 01:23:50.153566 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-28 01:23:50.514803 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-28 01:23:50.514996 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-28 01:23:50.569074 | orchestrator | 2026-03-28 01:23:50.569170 | orchestrator | ## Containers @ testbed-node-2 2026-03-28 01:23:50.569186 | orchestrator | 2026-03-28 01:23:50.569198 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-28 01:23:50.569209 | orchestrator | + echo 2026-03-28 01:23:50.569220 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-03-28 01:23:50.569232 | orchestrator | + echo 2026-03-28 01:23:50.569243 | orchestrator | + osism container testbed-node-2 ps 2026-03-28 01:23:53.114908 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-28 01:23:53.115003 | orchestrator | b77819d71f5f registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-03-28 01:23:53.115017 | orchestrator | 18ef9e6b4ff8 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-03-28 01:23:53.115028 | orchestrator | 9f4c2253adea registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-03-28 01:23:53.115037 | orchestrator | 701a2f2baffc registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2026-03-28 01:23:53.115047 | 
orchestrator | 10c4359d7a7a registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-03-28 01:23:53.115057 | orchestrator | 269c66402289 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2026-03-28 01:23:53.115067 | orchestrator | a6f3dbea356a registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_conductor 2026-03-28 01:23:53.115076 | orchestrator | 27e9834d116a registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana 2026-03-28 01:23:53.115086 | orchestrator | b5777416587f registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api 2026-03-28 01:23:53.115095 | orchestrator | f0c197f6e86e registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-03-28 01:23:53.115139 | orchestrator | 25f05e909545 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) glance_api 2026-03-28 01:23:53.115149 | orchestrator | 9c45875d6397 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup 2026-03-28 01:23:53.115159 | orchestrator | 50d1cd63489e registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume 2026-03-28 01:23:53.115168 | orchestrator | eac35ce288a7 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_scheduler 2026-03-28 01:23:53.115178 | orchestrator | df2c004624c7 
registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_api 2026-03-28 01:23:53.115187 | orchestrator | d01dab86b721 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 13 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2026-03-28 01:23:53.115199 | orchestrator | 2ac87df72edf registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2026-03-28 01:23:53.115209 | orchestrator | 48d2f60b15da registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2026-03-28 01:23:53.115218 | orchestrator | c9aa07d8bd80 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2026-03-28 01:23:53.115248 | orchestrator | 762ecea92de4 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2026-03-28 01:23:53.115267 | orchestrator | 8bc52f6491ab registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server 2026-03-28 01:23:53.115283 | orchestrator | edbfdfcdbd02 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) magnum_conductor 2026-03-28 01:23:53.115300 | orchestrator | 0f707387e48b registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) magnum_api 2026-03-28 01:23:53.115317 | orchestrator | 6fd275a994d7 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2026-03-28 
01:23:53.115333 | orchestrator | 7e53c027ccd5 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker 2026-03-28 01:23:53.115372 | orchestrator | 23432e9bde4e registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns 2026-03-28 01:23:53.115392 | orchestrator | b12c5f373d6f registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_producer 2026-03-28 01:23:53.115412 | orchestrator | e8c5ad662257 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_central 2026-03-28 01:23:53.115431 | orchestrator | af2305667b57 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_api 2026-03-28 01:23:53.115451 | orchestrator | 55e230511c36 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_backend_bind9 2026-03-28 01:23:53.115468 | orchestrator | 82357bba8f76 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2026-03-28 01:23:53.115480 | orchestrator | 647ce57c9c5f registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener 2026-03-28 01:23:53.115490 | orchestrator | 2b6c6f4bcd94 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api 2026-03-28 01:23:53.115501 | orchestrator | f03aeb9a6c0f registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 18 minutes ago Up 18 minutes 
ceph-mgr-testbed-node-2 2026-03-28 01:23:53.115512 | orchestrator | c26c05315c6c registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2026-03-28 01:23:53.115523 | orchestrator | aa8bbaecdbd0 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon 2026-03-28 01:23:53.115534 | orchestrator | 3e1ab8c59713 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2026-03-28 01:23:53.115545 | orchestrator | 891f3874fde1 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_ssh 2026-03-28 01:23:53.115556 | orchestrator | 4700d6d80463 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 23 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2026-03-28 01:23:53.115567 | orchestrator | 5a8e467acf54 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" 23 minutes ago Up 23 minutes (healthy) mariadb 2026-03-28 01:23:53.115587 | orchestrator | 39731eacfcd8 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch 2026-03-28 01:23:53.115599 | orchestrator | 246f9749bc92 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-2 2026-03-28 01:23:53.115610 | orchestrator | 6ac08e6c6021 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes keepalived 2026-03-28 01:23:53.115620 | orchestrator | 1d842ec56a88 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) proxysql 2026-03-28 01:23:53.115631 | orchestrator | 5a8ac9f59d48 
registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) haproxy 2026-03-28 01:23:53.115641 | orchestrator | 47e5168f51d5 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_northd 2026-03-28 01:23:53.115653 | orchestrator | 09cd373257f8 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_sb_db 2026-03-28 01:23:53.115671 | orchestrator | 496c0d1b3e80 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" 30 minutes ago Up 29 minutes ovn_nb_db 2026-03-28 01:23:53.115682 | orchestrator | a4d093032fd1 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) rabbitmq 2026-03-28 01:23:53.115694 | orchestrator | 285cf592d349 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_controller 2026-03-28 01:23:53.115705 | orchestrator | df6a25d5a7c9 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 30 minutes ago Up 30 minutes ceph-mon-testbed-node-2 2026-03-28 01:23:53.115716 | orchestrator | 353081eb0d76 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" 32 minutes ago Up 31 minutes (healthy) openvswitch_vswitchd 2026-03-28 01:23:53.115758 | orchestrator | f1f3dbb3617a registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) openvswitch_db 2026-03-28 01:23:53.115776 | orchestrator | d257a95c6dcb registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis_sentinel 2026-03-28 01:23:53.115792 | orchestrator | b3d87e837a15 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 32 minutes ago 
Up 32 minutes (healthy) redis 2026-03-28 01:23:53.115808 | orchestrator | e3840f2447b0 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) memcached 2026-03-28 01:23:53.115824 | orchestrator | b46d142e970c registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes cron 2026-03-28 01:23:53.115841 | orchestrator | 1e45f0ca67a9 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes kolla_toolbox 2026-03-28 01:23:53.115859 | orchestrator | 12f2639cd684 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes fluentd 2026-03-28 01:23:53.489860 | orchestrator | 2026-03-28 01:23:53.489965 | orchestrator | ## Images @ testbed-node-2 2026-03-28 01:23:53.489983 | orchestrator | 2026-03-28 01:23:53.489995 | orchestrator | + echo 2026-03-28 01:23:53.490007 | orchestrator | + echo '## Images @ testbed-node-2' 2026-03-28 01:23:53.490086 | orchestrator | + echo 2026-03-28 01:23:53.490102 | orchestrator | + osism container testbed-node-2 images 2026-03-28 01:23:56.108328 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-28 01:23:56.108420 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB 2026-03-28 01:23:56.108427 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-28 01:23:56.108434 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-28 01:23:56.108439 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-28 01:23:56.108460 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-28 01:23:56.108466 | orchestrator | 
registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-28 01:23:56.108493 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-28 01:23:56.108498 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-28 01:23:56.108503 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-28 01:23:56.108508 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-28 01:23:56.108513 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-28 01:23:56.108517 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-28 01:23:56.108522 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB 2026-03-28 01:23:56.108527 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-28 01:23:56.108532 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-28 01:23:56.108537 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB 2026-03-28 01:23:56.108541 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB 2026-03-28 01:23:56.108546 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-28 01:23:56.108551 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB 2026-03-28 01:23:56.108559 | orchestrator | 
registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-28 01:23:56.108564 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB 2026-03-28 01:23:56.108569 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB 2026-03-28 01:23:56.108574 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB 2026-03-28 01:23:56.108578 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB 2026-03-28 01:23:56.108584 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB 2026-03-28 01:23:56.108588 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB 2026-03-28 01:23:56.108593 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB 2026-03-28 01:23:56.108598 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB 2026-03-28 01:23:56.108603 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB 2026-03-28 01:23:56.108607 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB 2026-03-28 01:23:56.108612 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB 2026-03-28 01:23:56.108631 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB 2026-03-28 01:23:56.108640 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-28 01:23:56.108645 | orchestrator | 
registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-28 01:23:56.108650 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-28 01:23:56.108655 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-28 01:23:56.108660 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-28 01:23:56.108664 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-28 01:23:56.108669 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-28 01:23:56.108674 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-28 01:23:56.108679 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-28 01:23:56.108684 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-28 01:23:56.108688 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-28 01:23:56.108693 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-28 01:23:56.108698 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-28 01:23:56.108703 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-28 01:23:56.108708 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-28 01:23:56.108712 | 
orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-28 01:23:56.108717 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-28 01:23:56.108767 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-28 01:23:56.108773 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-28 01:23:56.108778 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-28 01:23:56.108783 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-28 01:23:56.108787 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-28 01:23:56.108793 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-28 01:23:56.108799 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-28 01:23:56.108804 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-28 01:23:56.486981 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-03-28 01:23:56.492920 | orchestrator | + set -e 2026-03-28 01:23:56.493034 | orchestrator | + source /opt/manager-vars.sh 2026-03-28 01:23:56.494541 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-28 01:23:56.494584 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-28 01:23:56.494599 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-28 01:23:56.494608 | orchestrator | ++ CEPH_VERSION=reef 2026-03-28 01:23:56.494669 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-28 01:23:56.494681 | orchestrator | ++ 
CONFIGURATION_VERSION=main 2026-03-28 01:23:56.494690 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-28 01:23:56.494699 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-28 01:23:56.494708 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-28 01:23:56.494717 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-28 01:23:56.494792 | orchestrator | ++ export ARA=false 2026-03-28 01:23:56.494802 | orchestrator | ++ ARA=false 2026-03-28 01:23:56.494811 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-28 01:23:56.494820 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-28 01:23:56.494829 | orchestrator | ++ export TEMPEST=true 2026-03-28 01:23:56.494837 | orchestrator | ++ TEMPEST=true 2026-03-28 01:23:56.494846 | orchestrator | ++ export IS_ZUUL=true 2026-03-28 01:23:56.494854 | orchestrator | ++ IS_ZUUL=true 2026-03-28 01:23:56.494863 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.253 2026-03-28 01:23:56.494872 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.253 2026-03-28 01:23:56.494881 | orchestrator | ++ export EXTERNAL_API=false 2026-03-28 01:23:56.494889 | orchestrator | ++ EXTERNAL_API=false 2026-03-28 01:23:56.494898 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-28 01:23:56.494906 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-28 01:23:56.494915 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-28 01:23:56.494924 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-28 01:23:56.494932 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-28 01:23:56.494941 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-28 01:23:56.494950 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-28 01:23:56.494963 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-03-28 01:23:56.505637 | orchestrator | + set -e 2026-03-28 01:23:56.505821 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 01:23:56.505841 | orchestrator | ++ export 
INTERACTIVE=false 2026-03-28 01:23:56.505854 | orchestrator | ++ INTERACTIVE=false 2026-03-28 01:23:56.505864 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 01:23:56.505873 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 01:23:56.505951 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-28 01:23:56.507172 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-28 01:23:56.511539 | orchestrator | 2026-03-28 01:23:56.511604 | orchestrator | # Ceph status 2026-03-28 01:23:56.511618 | orchestrator | 2026-03-28 01:23:56.511629 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-28 01:23:56.511642 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-28 01:23:56.511654 | orchestrator | + echo 2026-03-28 01:23:56.511665 | orchestrator | + echo '# Ceph status' 2026-03-28 01:23:56.511676 | orchestrator | + echo 2026-03-28 01:23:56.511687 | orchestrator | + ceph -s 2026-03-28 01:23:57.087627 | orchestrator | cluster: 2026-03-28 01:23:57.087716 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-03-28 01:23:57.087773 | orchestrator | health: HEALTH_OK 2026-03-28 01:23:57.087784 | orchestrator | 2026-03-28 01:23:57.087791 | orchestrator | services: 2026-03-28 01:23:57.087799 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 30m) 2026-03-28 01:23:57.087813 | orchestrator | mgr: testbed-node-1(active, since 17m), standbys: testbed-node-2, testbed-node-0 2026-03-28 01:23:57.087822 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-03-28 01:23:57.087829 | orchestrator | osd: 6 osds: 6 up (since 26m), 6 in (since 27m) 2026-03-28 01:23:57.087837 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-03-28 01:23:57.087845 | orchestrator | 2026-03-28 01:23:57.087852 | orchestrator | data: 2026-03-28 01:23:57.087860 | orchestrator | volumes: 1/1 healthy 2026-03-28 01:23:57.087867 | orchestrator | pools: 14 
pools, 401 pgs 2026-03-28 01:23:57.087875 | orchestrator | objects: 555 objects, 2.2 GiB 2026-03-28 01:23:57.087882 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-03-28 01:23:57.087890 | orchestrator | pgs: 401 active+clean 2026-03-28 01:23:57.087978 | orchestrator | 2026-03-28 01:23:57.135887 | orchestrator | 2026-03-28 01:23:57.135985 | orchestrator | # Ceph versions 2026-03-28 01:23:57.136001 | orchestrator | 2026-03-28 01:23:57.136013 | orchestrator | + echo 2026-03-28 01:23:57.136024 | orchestrator | + echo '# Ceph versions' 2026-03-28 01:23:57.136036 | orchestrator | + echo 2026-03-28 01:23:57.136072 | orchestrator | + ceph versions 2026-03-28 01:23:57.773629 | orchestrator | { 2026-03-28 01:23:57.773836 | orchestrator | "mon": { 2026-03-28 01:23:57.773874 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-28 01:23:57.773887 | orchestrator | }, 2026-03-28 01:23:57.773898 | orchestrator | "mgr": { 2026-03-28 01:23:57.773909 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-28 01:23:57.773921 | orchestrator | }, 2026-03-28 01:23:57.773932 | orchestrator | "osd": { 2026-03-28 01:23:57.773943 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2026-03-28 01:23:57.773954 | orchestrator | }, 2026-03-28 01:23:57.773965 | orchestrator | "mds": { 2026-03-28 01:23:57.773976 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-28 01:23:57.773987 | orchestrator | }, 2026-03-28 01:23:57.773998 | orchestrator | "rgw": { 2026-03-28 01:23:57.774009 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-28 01:23:57.774111 | orchestrator | }, 2026-03-28 01:23:57.774124 | orchestrator | "overall": { 2026-03-28 01:23:57.774136 | orchestrator | "ceph version 18.2.7 
(6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2026-03-28 01:23:57.774149 | orchestrator | } 2026-03-28 01:23:57.774160 | orchestrator | } 2026-03-28 01:23:57.821123 | orchestrator | 2026-03-28 01:23:57.821222 | orchestrator | # Ceph OSD tree 2026-03-28 01:23:57.821238 | orchestrator | 2026-03-28 01:23:57.821251 | orchestrator | + echo 2026-03-28 01:23:57.821263 | orchestrator | + echo '# Ceph OSD tree' 2026-03-28 01:23:57.821274 | orchestrator | + echo 2026-03-28 01:23:57.821286 | orchestrator | + ceph osd df tree 2026-03-28 01:23:58.362210 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-03-28 01:23:58.362307 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 421 MiB 113 GiB 5.91 1.00 - root default 2026-03-28 01:23:58.362321 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.90 1.00 - host testbed-node-3 2026-03-28 01:23:58.362332 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 70 MiB 19 GiB 7.16 1.21 215 up osd.1 2026-03-28 01:23:58.362360 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 952 MiB 882 MiB 1 KiB 70 MiB 19 GiB 4.65 0.79 175 up osd.5 2026-03-28 01:23:58.362371 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-4 2026-03-28 01:23:58.362380 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1000 MiB 930 MiB 1 KiB 70 MiB 19 GiB 4.89 0.83 174 up osd.0 2026-03-28 01:23:58.362390 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.92 1.17 218 up osd.3 2026-03-28 01:23:58.362400 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2026-03-28 01:23:58.362409 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.34 1.07 199 up osd.2 2026-03-28 01:23:58.362418 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 74 MiB 19 GiB 5.49 0.93 189 up osd.4 2026-03-28 
01:23:58.362428 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 421 MiB 113 GiB 5.91 2026-03-28 01:23:58.362438 | orchestrator | MIN/MAX VAR: 0.79/1.21 STDDEV: 0.96 2026-03-28 01:23:58.404332 | orchestrator | 2026-03-28 01:23:58.404455 | orchestrator | # Ceph monitor status 2026-03-28 01:23:58.404473 | orchestrator | 2026-03-28 01:23:58.404485 | orchestrator | + echo 2026-03-28 01:23:58.404496 | orchestrator | + echo '# Ceph monitor status' 2026-03-28 01:23:58.404508 | orchestrator | + echo 2026-03-28 01:23:58.404519 | orchestrator | + ceph mon stat 2026-03-28 01:23:58.998318 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-03-28 01:23:59.068672 | orchestrator | 2026-03-28 01:23:59.068818 | orchestrator | # Ceph quorum status 2026-03-28 01:23:59.068836 | orchestrator | 2026-03-28 01:23:59.068848 | orchestrator | + echo 2026-03-28 01:23:59.068860 | orchestrator | + echo '# Ceph quorum status' 2026-03-28 01:23:59.068871 | orchestrator | + echo 2026-03-28 01:23:59.068882 | orchestrator | + ceph quorum_status 2026-03-28 01:23:59.068893 | orchestrator | + jq 2026-03-28 01:23:59.707070 | orchestrator | { 2026-03-28 01:23:59.707195 | orchestrator | "election_epoch": 6, 2026-03-28 01:23:59.707222 | orchestrator | "quorum": [ 2026-03-28 01:23:59.707242 | orchestrator | 0, 2026-03-28 01:23:59.707261 | orchestrator | 1, 2026-03-28 01:23:59.707280 | orchestrator | 2 2026-03-28 01:23:59.707298 | orchestrator | ], 2026-03-28 01:23:59.707317 | orchestrator | "quorum_names": [ 2026-03-28 01:23:59.707329 | orchestrator | "testbed-node-0", 2026-03-28 01:23:59.707340 | orchestrator | "testbed-node-1", 2026-03-28 01:23:59.707351 | orchestrator | 
"testbed-node-2" 2026-03-28 01:23:59.707362 | orchestrator | ], 2026-03-28 01:23:59.707373 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-03-28 01:23:59.707385 | orchestrator | "quorum_age": 1854, 2026-03-28 01:23:59.707396 | orchestrator | "features": { 2026-03-28 01:23:59.707407 | orchestrator | "quorum_con": "4540138322906710015", 2026-03-28 01:23:59.707418 | orchestrator | "quorum_mon": [ 2026-03-28 01:23:59.707428 | orchestrator | "kraken", 2026-03-28 01:23:59.707439 | orchestrator | "luminous", 2026-03-28 01:23:59.707450 | orchestrator | "mimic", 2026-03-28 01:23:59.707460 | orchestrator | "osdmap-prune", 2026-03-28 01:23:59.707471 | orchestrator | "nautilus", 2026-03-28 01:23:59.707482 | orchestrator | "octopus", 2026-03-28 01:23:59.707492 | orchestrator | "pacific", 2026-03-28 01:23:59.707502 | orchestrator | "elector-pinging", 2026-03-28 01:23:59.707513 | orchestrator | "quincy", 2026-03-28 01:23:59.707524 | orchestrator | "reef" 2026-03-28 01:23:59.707534 | orchestrator | ] 2026-03-28 01:23:59.707545 | orchestrator | }, 2026-03-28 01:23:59.707555 | orchestrator | "monmap": { 2026-03-28 01:23:59.707566 | orchestrator | "epoch": 1, 2026-03-28 01:23:59.707576 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-03-28 01:23:59.707588 | orchestrator | "modified": "2026-03-28T00:52:41.159282Z", 2026-03-28 01:23:59.707599 | orchestrator | "created": "2026-03-28T00:52:41.159282Z", 2026-03-28 01:23:59.707609 | orchestrator | "min_mon_release": 18, 2026-03-28 01:23:59.707620 | orchestrator | "min_mon_release_name": "reef", 2026-03-28 01:23:59.707630 | orchestrator | "election_strategy": 1, 2026-03-28 01:23:59.707641 | orchestrator | "disallowed_leaders: ": "", 2026-03-28 01:23:59.707651 | orchestrator | "stretch_mode": false, 2026-03-28 01:23:59.707662 | orchestrator | "tiebreaker_mon": "", 2026-03-28 01:23:59.707672 | orchestrator | "removed_ranks: ": "", 2026-03-28 01:23:59.707683 | orchestrator | "features": { 2026-03-28 
01:23:59.707693 | orchestrator | "persistent": [ 2026-03-28 01:23:59.707704 | orchestrator | "kraken", 2026-03-28 01:23:59.707714 | orchestrator | "luminous", 2026-03-28 01:23:59.707759 | orchestrator | "mimic", 2026-03-28 01:23:59.707772 | orchestrator | "osdmap-prune", 2026-03-28 01:23:59.707783 | orchestrator | "nautilus", 2026-03-28 01:23:59.707793 | orchestrator | "octopus", 2026-03-28 01:23:59.707804 | orchestrator | "pacific", 2026-03-28 01:23:59.707814 | orchestrator | "elector-pinging", 2026-03-28 01:23:59.707825 | orchestrator | "quincy", 2026-03-28 01:23:59.707835 | orchestrator | "reef" 2026-03-28 01:23:59.707846 | orchestrator | ], 2026-03-28 01:23:59.707856 | orchestrator | "optional": [] 2026-03-28 01:23:59.707867 | orchestrator | }, 2026-03-28 01:23:59.707878 | orchestrator | "mons": [ 2026-03-28 01:23:59.707888 | orchestrator | { 2026-03-28 01:23:59.707899 | orchestrator | "rank": 0, 2026-03-28 01:23:59.707909 | orchestrator | "name": "testbed-node-0", 2026-03-28 01:23:59.707920 | orchestrator | "public_addrs": { 2026-03-28 01:23:59.707930 | orchestrator | "addrvec": [ 2026-03-28 01:23:59.707941 | orchestrator | { 2026-03-28 01:23:59.707951 | orchestrator | "type": "v2", 2026-03-28 01:23:59.707963 | orchestrator | "addr": "192.168.16.10:3300", 2026-03-28 01:23:59.707981 | orchestrator | "nonce": 0 2026-03-28 01:23:59.708000 | orchestrator | }, 2026-03-28 01:23:59.708018 | orchestrator | { 2026-03-28 01:23:59.708036 | orchestrator | "type": "v1", 2026-03-28 01:23:59.708054 | orchestrator | "addr": "192.168.16.10:6789", 2026-03-28 01:23:59.708071 | orchestrator | "nonce": 0 2026-03-28 01:23:59.708089 | orchestrator | } 2026-03-28 01:23:59.708140 | orchestrator | ] 2026-03-28 01:23:59.708160 | orchestrator | }, 2026-03-28 01:23:59.708181 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-03-28 01:23:59.708200 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-03-28 01:23:59.708218 | orchestrator | "priority": 0, 2026-03-28 01:23:59.708237 
| orchestrator | "weight": 0, 2026-03-28 01:23:59.708255 | orchestrator | "crush_location": "{}" 2026-03-28 01:23:59.708273 | orchestrator | }, 2026-03-28 01:23:59.708291 | orchestrator | { 2026-03-28 01:23:59.708311 | orchestrator | "rank": 1, 2026-03-28 01:23:59.708330 | orchestrator | "name": "testbed-node-1", 2026-03-28 01:23:59.708349 | orchestrator | "public_addrs": { 2026-03-28 01:23:59.708368 | orchestrator | "addrvec": [ 2026-03-28 01:23:59.708387 | orchestrator | { 2026-03-28 01:23:59.708407 | orchestrator | "type": "v2", 2026-03-28 01:23:59.708425 | orchestrator | "addr": "192.168.16.11:3300", 2026-03-28 01:23:59.708443 | orchestrator | "nonce": 0 2026-03-28 01:23:59.708461 | orchestrator | }, 2026-03-28 01:23:59.708481 | orchestrator | { 2026-03-28 01:23:59.708500 | orchestrator | "type": "v1", 2026-03-28 01:23:59.708520 | orchestrator | "addr": "192.168.16.11:6789", 2026-03-28 01:23:59.708536 | orchestrator | "nonce": 0 2026-03-28 01:23:59.708555 | orchestrator | } 2026-03-28 01:23:59.708573 | orchestrator | ] 2026-03-28 01:23:59.708592 | orchestrator | }, 2026-03-28 01:23:59.708611 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-03-28 01:23:59.708630 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-03-28 01:23:59.708647 | orchestrator | "priority": 0, 2026-03-28 01:23:59.708667 | orchestrator | "weight": 0, 2026-03-28 01:23:59.708679 | orchestrator | "crush_location": "{}" 2026-03-28 01:23:59.708689 | orchestrator | }, 2026-03-28 01:23:59.708700 | orchestrator | { 2026-03-28 01:23:59.708711 | orchestrator | "rank": 2, 2026-03-28 01:23:59.708751 | orchestrator | "name": "testbed-node-2", 2026-03-28 01:23:59.708764 | orchestrator | "public_addrs": { 2026-03-28 01:23:59.708774 | orchestrator | "addrvec": [ 2026-03-28 01:23:59.708785 | orchestrator | { 2026-03-28 01:23:59.708796 | orchestrator | "type": "v2", 2026-03-28 01:23:59.708806 | orchestrator | "addr": "192.168.16.12:3300", 2026-03-28 01:23:59.708817 | orchestrator | "nonce": 0 
2026-03-28 01:23:59.708828 | orchestrator | }, 2026-03-28 01:23:59.708838 | orchestrator | { 2026-03-28 01:23:59.708849 | orchestrator | "type": "v1", 2026-03-28 01:23:59.708861 | orchestrator | "addr": "192.168.16.12:6789", 2026-03-28 01:23:59.708871 | orchestrator | "nonce": 0 2026-03-28 01:23:59.708882 | orchestrator | } 2026-03-28 01:23:59.708893 | orchestrator | ] 2026-03-28 01:23:59.708903 | orchestrator | }, 2026-03-28 01:23:59.708914 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-03-28 01:23:59.708924 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-03-28 01:23:59.708935 | orchestrator | "priority": 0, 2026-03-28 01:23:59.708945 | orchestrator | "weight": 0, 2026-03-28 01:23:59.708956 | orchestrator | "crush_location": "{}" 2026-03-28 01:23:59.708967 | orchestrator | } 2026-03-28 01:23:59.708977 | orchestrator | ] 2026-03-28 01:23:59.708988 | orchestrator | } 2026-03-28 01:23:59.708999 | orchestrator | } 2026-03-28 01:23:59.709009 | orchestrator | 2026-03-28 01:23:59.709020 | orchestrator | # Ceph free space status 2026-03-28 01:23:59.709031 | orchestrator | 2026-03-28 01:23:59.709042 | orchestrator | + echo 2026-03-28 01:23:59.709053 | orchestrator | + echo '# Ceph free space status' 2026-03-28 01:23:59.709064 | orchestrator | + echo 2026-03-28 01:23:59.709075 | orchestrator | + ceph df 2026-03-28 01:24:00.342405 | orchestrator | --- RAW STORAGE --- 2026-03-28 01:24:00.342495 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-03-28 01:24:00.342519 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91 2026-03-28 01:24:00.342529 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91 2026-03-28 01:24:00.342537 | orchestrator | 2026-03-28 01:24:00.342546 | orchestrator | --- POOLS --- 2026-03-28 01:24:00.342555 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-03-28 01:24:00.342564 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-03-28 01:24:00.342575 | orchestrator | cephfs_data 2 32 0 B 0 0 
B 0 35 GiB 2026-03-28 01:24:00.342589 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-03-28 01:24:00.342602 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-03-28 01:24:00.342652 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-03-28 01:24:00.342667 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-03-28 01:24:00.342681 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-03-28 01:24:00.342694 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-03-28 01:24:00.342706 | orchestrator | .rgw.root 9 32 3.5 KiB 7 56 KiB 0 53 GiB 2026-03-28 01:24:00.342747 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-03-28 01:24:00.342760 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-03-28 01:24:00.342771 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.94 35 GiB 2026-03-28 01:24:00.342783 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-03-28 01:24:00.342794 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-03-28 01:24:00.402706 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-28 01:24:00.465580 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-28 01:24:00.465663 | orchestrator | + osism apply facts 2026-03-28 01:24:12.691643 | orchestrator | 2026-03-28 01:24:12 | INFO  | Task 271a22a1-b55e-404a-8123-5a78b7d47fb2 (facts) was prepared for execution. 2026-03-28 01:24:12.691844 | orchestrator | 2026-03-28 01:24:12 | INFO  | It takes a moment until task 271a22a1-b55e-404a-8123-5a78b7d47fb2 (facts) has been started and output is visible here. 
2026-03-28 01:24:27.116655 | orchestrator | 2026-03-28 01:24:27.116850 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-28 01:24:27.116881 | orchestrator | 2026-03-28 01:24:27.116901 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-28 01:24:27.116922 | orchestrator | Saturday 28 March 2026 01:24:17 +0000 (0:00:00.318) 0:00:00.318 ******** 2026-03-28 01:24:27.116941 | orchestrator | ok: [testbed-manager] 2026-03-28 01:24:27.116960 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:24:27.116972 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:24:27.116983 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:24:27.116996 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:24:27.117014 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:24:27.117033 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:24:27.117052 | orchestrator | 2026-03-28 01:24:27.117070 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-28 01:24:27.117089 | orchestrator | Saturday 28 March 2026 01:24:18 +0000 (0:00:01.556) 0:00:01.875 ******** 2026-03-28 01:24:27.117109 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:24:27.117128 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:24:27.117147 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:24:27.117165 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:24:27.117184 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:24:27.117203 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:24:27.117223 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:24:27.117243 | orchestrator | 2026-03-28 01:24:27.117286 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-28 01:24:27.117306 | orchestrator | 2026-03-28 01:24:27.117325 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-28 01:24:27.117344 | orchestrator | Saturday 28 March 2026 01:24:20 +0000 (0:00:01.569) 0:00:03.445 ******** 2026-03-28 01:24:27.117362 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:24:27.117383 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:24:27.117402 | orchestrator | ok: [testbed-manager] 2026-03-28 01:24:27.117420 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:24:27.117439 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:24:27.117458 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:24:27.117477 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:24:27.117497 | orchestrator | 2026-03-28 01:24:27.117516 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-28 01:24:27.117567 | orchestrator | 2026-03-28 01:24:27.117589 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-28 01:24:27.117608 | orchestrator | Saturday 28 March 2026 01:24:25 +0000 (0:00:05.449) 0:00:08.894 ******** 2026-03-28 01:24:27.117626 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:24:27.117645 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:24:27.117663 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:24:27.117680 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:24:27.117696 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:24:27.117740 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:24:27.117756 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:24:27.117772 | orchestrator | 2026-03-28 01:24:27.117789 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:24:27.117815 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:24:27.117834 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-28 01:24:27.117851 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:24:27.117869 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:24:27.117888 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:24:27.117905 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:24:27.117921 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:24:27.117937 | orchestrator | 2026-03-28 01:24:27.117953 | orchestrator | 2026-03-28 01:24:27.117970 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:24:27.117987 | orchestrator | Saturday 28 March 2026 01:24:26 +0000 (0:00:00.613) 0:00:09.508 ******** 2026-03-28 01:24:27.118186 | orchestrator | =============================================================================== 2026-03-28 01:24:27.118208 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.45s 2026-03-28 01:24:27.118225 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.57s 2026-03-28 01:24:27.118242 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.56s 2026-03-28 01:24:27.118259 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.61s 2026-03-28 01:24:27.476660 | orchestrator | + osism validate ceph-mons 2026-03-28 01:25:01.521927 | orchestrator | 2026-03-28 01:25:01.522134 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-03-28 01:25:01.522169 | orchestrator | 2026-03-28 01:25:01.522189 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-03-28 01:25:01.522210 | orchestrator | Saturday 28 March 2026 01:24:44 +0000 (0:00:00.475) 0:00:00.475 ******** 2026-03-28 01:25:01.522231 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:25:01.522251 | orchestrator | 2026-03-28 01:25:01.522263 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-28 01:25:01.522274 | orchestrator | Saturday 28 March 2026 01:24:45 +0000 (0:00:00.940) 0:00:01.415 ******** 2026-03-28 01:25:01.522286 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:25:01.522297 | orchestrator | 2026-03-28 01:25:01.522308 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-28 01:25:01.522319 | orchestrator | Saturday 28 March 2026 01:24:46 +0000 (0:00:01.136) 0:00:02.552 ******** 2026-03-28 01:25:01.522355 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:25:01.522367 | orchestrator | 2026-03-28 01:25:01.522379 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-03-28 01:25:01.522390 | orchestrator | Saturday 28 March 2026 01:24:46 +0000 (0:00:00.154) 0:00:02.706 ******** 2026-03-28 01:25:01.522401 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:25:01.522412 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:25:01.522423 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:25:01.522433 | orchestrator | 2026-03-28 01:25:01.522444 | orchestrator | TASK [Get container info] ****************************************************** 2026-03-28 01:25:01.522455 | orchestrator | Saturday 28 March 2026 01:24:47 +0000 (0:00:00.324) 0:00:03.030 ******** 2026-03-28 01:25:01.522466 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:25:01.522477 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:25:01.522488 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:25:01.522499 | 
orchestrator | 2026-03-28 01:25:01.522510 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-03-28 01:25:01.522521 | orchestrator | Saturday 28 March 2026 01:24:48 +0000 (0:00:01.094) 0:00:04.125 ******** 2026-03-28 01:25:01.522532 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:25:01.522543 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:25:01.522554 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:25:01.522565 | orchestrator | 2026-03-28 01:25:01.522576 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-03-28 01:25:01.522587 | orchestrator | Saturday 28 March 2026 01:24:48 +0000 (0:00:00.304) 0:00:04.429 ******** 2026-03-28 01:25:01.522598 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:25:01.522608 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:25:01.522619 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:25:01.522630 | orchestrator | 2026-03-28 01:25:01.522642 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-28 01:25:01.522653 | orchestrator | Saturday 28 March 2026 01:24:49 +0000 (0:00:00.648) 0:00:05.078 ******** 2026-03-28 01:25:01.522664 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:25:01.522674 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:25:01.522712 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:25:01.522725 | orchestrator | 2026-03-28 01:25:01.522736 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-03-28 01:25:01.522747 | orchestrator | Saturday 28 March 2026 01:24:49 +0000 (0:00:00.333) 0:00:05.411 ******** 2026-03-28 01:25:01.522758 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:25:01.522769 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:25:01.522780 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:25:01.522790 | orchestrator | 2026-03-28 
01:25:01.522801 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-03-28 01:25:01.522827 | orchestrator | Saturday 28 March 2026 01:24:49 +0000 (0:00:00.337) 0:00:05.749 ******** 2026-03-28 01:25:01.522838 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:25:01.522849 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:25:01.522860 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:25:01.522870 | orchestrator | 2026-03-28 01:25:01.522881 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-28 01:25:01.522892 | orchestrator | Saturday 28 March 2026 01:24:50 +0000 (0:00:00.566) 0:00:06.315 ******** 2026-03-28 01:25:01.522903 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:25:01.522914 | orchestrator | 2026-03-28 01:25:01.522925 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-28 01:25:01.522935 | orchestrator | Saturday 28 March 2026 01:24:50 +0000 (0:00:00.273) 0:00:06.589 ******** 2026-03-28 01:25:01.522946 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:25:01.522957 | orchestrator | 2026-03-28 01:25:01.522968 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-28 01:25:01.522979 | orchestrator | Saturday 28 March 2026 01:24:51 +0000 (0:00:00.256) 0:00:06.846 ******** 2026-03-28 01:25:01.522990 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:25:01.523009 | orchestrator | 2026-03-28 01:25:01.523020 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:25:01.523031 | orchestrator | Saturday 28 March 2026 01:24:51 +0000 (0:00:00.252) 0:00:07.098 ******** 2026-03-28 01:25:01.523042 | orchestrator | 2026-03-28 01:25:01.523053 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:25:01.523064 | orchestrator | 
Saturday 28 March 2026 01:24:51 +0000 (0:00:00.072) 0:00:07.171 ******** 2026-03-28 01:25:01.523075 | orchestrator | 2026-03-28 01:25:01.523086 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:25:01.523097 | orchestrator | Saturday 28 March 2026 01:24:51 +0000 (0:00:00.096) 0:00:07.268 ******** 2026-03-28 01:25:01.523107 | orchestrator | 2026-03-28 01:25:01.523118 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-28 01:25:01.523129 | orchestrator | Saturday 28 March 2026 01:24:51 +0000 (0:00:00.076) 0:00:07.345 ******** 2026-03-28 01:25:01.523140 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:25:01.523151 | orchestrator | 2026-03-28 01:25:01.523162 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-03-28 01:25:01.523173 | orchestrator | Saturday 28 March 2026 01:24:51 +0000 (0:00:00.253) 0:00:07.598 ******** 2026-03-28 01:25:01.523184 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:25:01.523195 | orchestrator | 2026-03-28 01:25:01.523227 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-03-28 01:25:01.523239 | orchestrator | Saturday 28 March 2026 01:24:52 +0000 (0:00:00.265) 0:00:07.864 ******** 2026-03-28 01:25:01.523250 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:25:01.523261 | orchestrator | 2026-03-28 01:25:01.523272 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-03-28 01:25:01.523283 | orchestrator | Saturday 28 March 2026 01:24:52 +0000 (0:00:00.102) 0:00:07.966 ******** 2026-03-28 01:25:01.523293 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:25:01.523304 | orchestrator | 2026-03-28 01:25:01.523315 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-03-28 01:25:01.523326 | orchestrator | 
Saturday 28 March 2026 01:24:53 +0000 (0:00:01.703) 0:00:09.670 ******** 2026-03-28 01:25:01.523336 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:25:01.523347 | orchestrator | 2026-03-28 01:25:01.523358 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-03-28 01:25:01.523369 | orchestrator | Saturday 28 March 2026 01:24:54 +0000 (0:00:00.614) 0:00:10.285 ******** 2026-03-28 01:25:01.523379 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:25:01.523390 | orchestrator | 2026-03-28 01:25:01.523401 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-03-28 01:25:01.523411 | orchestrator | Saturday 28 March 2026 01:24:54 +0000 (0:00:00.153) 0:00:10.438 ******** 2026-03-28 01:25:01.523422 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:25:01.523433 | orchestrator | 2026-03-28 01:25:01.523444 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-03-28 01:25:01.523455 | orchestrator | Saturday 28 March 2026 01:24:54 +0000 (0:00:00.344) 0:00:10.782 ******** 2026-03-28 01:25:01.523466 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:25:01.523477 | orchestrator | 2026-03-28 01:25:01.523487 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-03-28 01:25:01.523498 | orchestrator | Saturday 28 March 2026 01:24:55 +0000 (0:00:00.330) 0:00:11.113 ******** 2026-03-28 01:25:01.523509 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:25:01.523520 | orchestrator | 2026-03-28 01:25:01.523531 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-03-28 01:25:01.523542 | orchestrator | Saturday 28 March 2026 01:24:55 +0000 (0:00:00.129) 0:00:11.242 ******** 2026-03-28 01:25:01.523552 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:25:01.523563 | orchestrator | 2026-03-28 01:25:01.523574 | orchestrator | TASK 
[Prepare status test vars] ************************************************ 2026-03-28 01:25:01.523585 | orchestrator | Saturday 28 March 2026 01:24:55 +0000 (0:00:00.128) 0:00:11.371 ******** 2026-03-28 01:25:01.523603 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:25:01.523614 | orchestrator | 2026-03-28 01:25:01.523624 | orchestrator | TASK [Gather status data] ****************************************************** 2026-03-28 01:25:01.523635 | orchestrator | Saturday 28 March 2026 01:24:55 +0000 (0:00:00.139) 0:00:11.511 ******** 2026-03-28 01:25:01.523646 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:25:01.523657 | orchestrator | 2026-03-28 01:25:01.523668 | orchestrator | TASK [Set health test data] **************************************************** 2026-03-28 01:25:01.523679 | orchestrator | Saturday 28 March 2026 01:24:57 +0000 (0:00:01.387) 0:00:12.898 ******** 2026-03-28 01:25:01.523732 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:25:01.523744 | orchestrator | 2026-03-28 01:25:01.523754 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-03-28 01:25:01.523765 | orchestrator | Saturday 28 March 2026 01:24:57 +0000 (0:00:00.322) 0:00:13.221 ******** 2026-03-28 01:25:01.523776 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:25:01.523787 | orchestrator | 2026-03-28 01:25:01.523797 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-03-28 01:25:01.523808 | orchestrator | Saturday 28 March 2026 01:24:57 +0000 (0:00:00.141) 0:00:13.362 ******** 2026-03-28 01:25:01.523819 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:25:01.523830 | orchestrator | 2026-03-28 01:25:01.523840 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-03-28 01:25:01.523851 | orchestrator | Saturday 28 March 2026 01:24:57 +0000 (0:00:00.174) 0:00:13.537 ******** 2026-03-28 01:25:01.523862 | 
orchestrator | skipping: [testbed-node-0] 2026-03-28 01:25:01.523873 | orchestrator | 2026-03-28 01:25:01.523883 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-03-28 01:25:01.523894 | orchestrator | Saturday 28 March 2026 01:24:57 +0000 (0:00:00.152) 0:00:13.689 ******** 2026-03-28 01:25:01.523905 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:25:01.523916 | orchestrator | 2026-03-28 01:25:01.523926 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-28 01:25:01.523937 | orchestrator | Saturday 28 March 2026 01:24:58 +0000 (0:00:00.341) 0:00:14.031 ******** 2026-03-28 01:25:01.523948 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:25:01.523959 | orchestrator | 2026-03-28 01:25:01.523970 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-28 01:25:01.523980 | orchestrator | Saturday 28 March 2026 01:24:58 +0000 (0:00:00.272) 0:00:14.303 ******** 2026-03-28 01:25:01.523991 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:25:01.524001 | orchestrator | 2026-03-28 01:25:01.524012 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-28 01:25:01.524031 | orchestrator | Saturday 28 March 2026 01:24:58 +0000 (0:00:00.270) 0:00:14.574 ******** 2026-03-28 01:25:01.524043 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:25:01.524054 | orchestrator | 2026-03-28 01:25:01.524065 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-28 01:25:01.524075 | orchestrator | Saturday 28 March 2026 01:25:00 +0000 (0:00:01.951) 0:00:16.525 ******** 2026-03-28 01:25:01.524091 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:25:01.524102 | orchestrator | 2026-03-28 01:25:01.524112 | orchestrator | 
TASK [Aggregate test results step three] *************************************** 2026-03-28 01:25:01.524123 | orchestrator | Saturday 28 March 2026 01:25:01 +0000 (0:00:00.268) 0:00:16.794 ******** 2026-03-28 01:25:01.524134 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:25:01.524145 | orchestrator | 2026-03-28 01:25:01.524163 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:25:04.446248 | orchestrator | Saturday 28 March 2026 01:25:01 +0000 (0:00:00.287) 0:00:17.082 ******** 2026-03-28 01:25:04.446375 | orchestrator | 2026-03-28 01:25:04.446401 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:25:04.446458 | orchestrator | Saturday 28 March 2026 01:25:01 +0000 (0:00:00.074) 0:00:17.156 ******** 2026-03-28 01:25:04.446479 | orchestrator | 2026-03-28 01:25:04.446499 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:25:04.446520 | orchestrator | Saturday 28 March 2026 01:25:01 +0000 (0:00:00.070) 0:00:17.227 ******** 2026-03-28 01:25:04.446540 | orchestrator | 2026-03-28 01:25:04.446560 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-28 01:25:04.446580 | orchestrator | Saturday 28 March 2026 01:25:01 +0000 (0:00:00.077) 0:00:17.304 ******** 2026-03-28 01:25:04.446600 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:25:04.446621 | orchestrator | 2026-03-28 01:25:04.446642 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-28 01:25:04.446660 | orchestrator | Saturday 28 March 2026 01:25:03 +0000 (0:00:01.620) 0:00:18.925 ******** 2026-03-28 01:25:04.446710 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-28 01:25:04.446732 | orchestrator |  "msg": [ 
2026-03-28 01:25:04.446753 | orchestrator |  "Validator run completed.", 2026-03-28 01:25:04.446774 | orchestrator |  "You can find the report file here:", 2026-03-28 01:25:04.446795 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-03-28T01:24:45+00:00-report.json", 2026-03-28 01:25:04.446816 | orchestrator |  "on the following host:", 2026-03-28 01:25:04.446834 | orchestrator |  "testbed-manager" 2026-03-28 01:25:04.446851 | orchestrator |  ] 2026-03-28 01:25:04.446870 | orchestrator | } 2026-03-28 01:25:04.446889 | orchestrator | 2026-03-28 01:25:04.446909 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:25:04.446931 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-28 01:25:04.446952 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:25:04.446972 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:25:04.446991 | orchestrator | 2026-03-28 01:25:04.447009 | orchestrator | 2026-03-28 01:25:04.447026 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:25:04.447044 | orchestrator | Saturday 28 March 2026 01:25:04 +0000 (0:00:00.921) 0:00:19.847 ******** 2026-03-28 01:25:04.447062 | orchestrator | =============================================================================== 2026-03-28 01:25:04.447080 | orchestrator | Aggregate test results step one ----------------------------------------- 1.95s 2026-03-28 01:25:04.447099 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.70s 2026-03-28 01:25:04.447117 | orchestrator | Write report file ------------------------------------------------------- 1.62s 2026-03-28 01:25:04.447136 | orchestrator | Gather status data 
------------------------------------------------------ 1.39s 2026-03-28 01:25:04.447177 | orchestrator | Create report output directory ------------------------------------------ 1.14s 2026-03-28 01:25:04.447195 | orchestrator | Get container info ------------------------------------------------------ 1.09s 2026-03-28 01:25:04.447213 | orchestrator | Get timestamp for report file ------------------------------------------- 0.94s 2026-03-28 01:25:04.447233 | orchestrator | Print report file information ------------------------------------------- 0.92s 2026-03-28 01:25:04.447253 | orchestrator | Set test result to passed if container is existing ---------------------- 0.65s 2026-03-28 01:25:04.447272 | orchestrator | Set quorum test data ---------------------------------------------------- 0.61s 2026-03-28 01:25:04.447291 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.57s 2026-03-28 01:25:04.447310 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.34s 2026-03-28 01:25:04.447328 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.34s 2026-03-28 01:25:04.447365 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.34s 2026-03-28 01:25:04.447383 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s 2026-03-28 01:25:04.447402 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.33s 2026-03-28 01:25:04.447415 | orchestrator | Prepare test data for container existance test -------------------------- 0.32s 2026-03-28 01:25:04.447426 | orchestrator | Set health test data ---------------------------------------------------- 0.32s 2026-03-28 01:25:04.447437 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s 2026-03-28 01:25:04.447448 | orchestrator | Aggregate test results step three 
--------------------------------------- 0.29s 2026-03-28 01:25:04.793287 | orchestrator | + osism validate ceph-mgrs 2026-03-28 01:25:37.254262 | orchestrator | 2026-03-28 01:25:37.254379 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-03-28 01:25:37.254393 | orchestrator | 2026-03-28 01:25:37.254403 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-28 01:25:37.254412 | orchestrator | Saturday 28 March 2026 01:25:22 +0000 (0:00:00.489) 0:00:00.489 ******** 2026-03-28 01:25:37.254422 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:25:37.254431 | orchestrator | 2026-03-28 01:25:37.254440 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-28 01:25:37.254448 | orchestrator | Saturday 28 March 2026 01:25:22 +0000 (0:00:00.900) 0:00:01.389 ******** 2026-03-28 01:25:37.254457 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:25:37.254466 | orchestrator | 2026-03-28 01:25:37.254475 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-28 01:25:37.254483 | orchestrator | Saturday 28 March 2026 01:25:24 +0000 (0:00:01.072) 0:00:02.461 ******** 2026-03-28 01:25:37.254492 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:25:37.254502 | orchestrator | 2026-03-28 01:25:37.254511 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-03-28 01:25:37.254519 | orchestrator | Saturday 28 March 2026 01:25:24 +0000 (0:00:00.133) 0:00:02.594 ******** 2026-03-28 01:25:37.254528 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:25:37.254537 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:25:37.254546 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:25:37.254555 | orchestrator | 2026-03-28 01:25:37.254564 | orchestrator | TASK [Get container info] 
****************************************************** 2026-03-28 01:25:37.254572 | orchestrator | Saturday 28 March 2026 01:25:24 +0000 (0:00:00.314) 0:00:02.909 ******** 2026-03-28 01:25:37.254581 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:25:37.254590 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:25:37.254598 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:25:37.254607 | orchestrator | 2026-03-28 01:25:37.254616 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-03-28 01:25:37.254624 | orchestrator | Saturday 28 March 2026 01:25:25 +0000 (0:00:01.074) 0:00:03.983 ******** 2026-03-28 01:25:37.254633 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:25:37.254642 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:25:37.254651 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:25:37.254694 | orchestrator | 2026-03-28 01:25:37.254705 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-03-28 01:25:37.254714 | orchestrator | Saturday 28 March 2026 01:25:25 +0000 (0:00:00.329) 0:00:04.313 ******** 2026-03-28 01:25:37.254722 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:25:37.254731 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:25:37.254739 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:25:37.254748 | orchestrator | 2026-03-28 01:25:37.254757 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-28 01:25:37.254765 | orchestrator | Saturday 28 March 2026 01:25:26 +0000 (0:00:00.536) 0:00:04.849 ******** 2026-03-28 01:25:37.254778 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:25:37.254825 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:25:37.254841 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:25:37.254855 | orchestrator | 2026-03-28 01:25:37.254869 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 
2026-03-28 01:25:37.254883 | orchestrator | Saturday 28 March 2026 01:25:26 +0000 (0:00:00.318) 0:00:05.168 ******** 2026-03-28 01:25:37.254896 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:25:37.254911 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:25:37.254923 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:25:37.254937 | orchestrator | 2026-03-28 01:25:37.254949 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-03-28 01:25:37.254962 | orchestrator | Saturday 28 March 2026 01:25:27 +0000 (0:00:00.313) 0:00:05.481 ******** 2026-03-28 01:25:37.254975 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:25:37.254993 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:25:37.255011 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:25:37.255029 | orchestrator | 2026-03-28 01:25:37.255046 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-28 01:25:37.255064 | orchestrator | Saturday 28 March 2026 01:25:27 +0000 (0:00:00.513) 0:00:05.995 ******** 2026-03-28 01:25:37.255081 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:25:37.255098 | orchestrator | 2026-03-28 01:25:37.255197 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-28 01:25:37.255226 | orchestrator | Saturday 28 March 2026 01:25:27 +0000 (0:00:00.257) 0:00:06.252 ******** 2026-03-28 01:25:37.255246 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:25:37.255266 | orchestrator | 2026-03-28 01:25:37.255286 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-28 01:25:37.255307 | orchestrator | Saturday 28 March 2026 01:25:28 +0000 (0:00:00.285) 0:00:06.538 ******** 2026-03-28 01:25:37.255329 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:25:37.255349 | orchestrator | 2026-03-28 01:25:37.255370 | orchestrator | TASK [Flush handlers] 
********************************************************** 2026-03-28 01:25:37.255391 | orchestrator | Saturday 28 March 2026 01:25:28 +0000 (0:00:00.250) 0:00:06.788 ******** 2026-03-28 01:25:37.255412 | orchestrator | 2026-03-28 01:25:37.255433 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:25:37.255456 | orchestrator | Saturday 28 March 2026 01:25:28 +0000 (0:00:00.070) 0:00:06.859 ******** 2026-03-28 01:25:37.255476 | orchestrator | 2026-03-28 01:25:37.255496 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:25:37.255516 | orchestrator | Saturday 28 March 2026 01:25:28 +0000 (0:00:00.084) 0:00:06.943 ******** 2026-03-28 01:25:37.255536 | orchestrator | 2026-03-28 01:25:37.255554 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-28 01:25:37.255574 | orchestrator | Saturday 28 March 2026 01:25:28 +0000 (0:00:00.074) 0:00:07.018 ******** 2026-03-28 01:25:37.255595 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:25:37.255615 | orchestrator | 2026-03-28 01:25:37.255632 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-03-28 01:25:37.255644 | orchestrator | Saturday 28 March 2026 01:25:28 +0000 (0:00:00.253) 0:00:07.272 ******** 2026-03-28 01:25:37.255655 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:25:37.255691 | orchestrator | 2026-03-28 01:25:37.255729 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-03-28 01:25:37.255741 | orchestrator | Saturday 28 March 2026 01:25:29 +0000 (0:00:00.300) 0:00:07.573 ******** 2026-03-28 01:25:37.255751 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:25:37.255762 | orchestrator | 2026-03-28 01:25:37.255787 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2026-03-28 
01:25:37.255808 | orchestrator | Saturday 28 March 2026 01:25:29 +0000 (0:00:00.109) 0:00:07.682 ********
2026-03-28 01:25:37.255820 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:25:37.255831 | orchestrator |
2026-03-28 01:25:37.255841 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-03-28 01:25:37.255869 | orchestrator | Saturday 28 March 2026 01:25:31 +0000 (0:00:02.075) 0:00:09.758 ********
2026-03-28 01:25:37.255880 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:25:37.255891 | orchestrator |
2026-03-28 01:25:37.255901 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-03-28 01:25:37.255912 | orchestrator | Saturday 28 March 2026 01:25:31 +0000 (0:00:00.468) 0:00:10.226 ********
2026-03-28 01:25:37.255923 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:25:37.255933 | orchestrator |
2026-03-28 01:25:37.255945 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-03-28 01:25:37.255955 | orchestrator | Saturday 28 March 2026 01:25:32 +0000 (0:00:00.352) 0:00:10.579 ********
2026-03-28 01:25:37.255966 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:25:37.255977 | orchestrator |
2026-03-28 01:25:37.255993 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-03-28 01:25:37.256013 | orchestrator | Saturday 28 March 2026 01:25:32 +0000 (0:00:00.152) 0:00:10.732 ********
2026-03-28 01:25:37.256033 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:25:37.256053 | orchestrator |
2026-03-28 01:25:37.256074 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-03-28 01:25:37.256094 | orchestrator | Saturday 28 March 2026 01:25:32 +0000 (0:00:00.160) 0:00:10.892 ********
2026-03-28 01:25:37.256114 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-28 01:25:37.256134 | orchestrator |
2026-03-28 01:25:37.256155 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-03-28 01:25:37.256177 | orchestrator | Saturday 28 March 2026 01:25:32 +0000 (0:00:00.284) 0:00:11.177 ********
2026-03-28 01:25:37.256199 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:25:37.256215 | orchestrator |
2026-03-28 01:25:37.256226 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-28 01:25:37.256237 | orchestrator | Saturday 28 March 2026 01:25:33 +0000 (0:00:00.271) 0:00:11.449 ********
2026-03-28 01:25:37.256247 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-28 01:25:37.256258 | orchestrator |
2026-03-28 01:25:37.256268 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-28 01:25:37.256279 | orchestrator | Saturday 28 March 2026 01:25:34 +0000 (0:00:01.344) 0:00:12.794 ********
2026-03-28 01:25:37.256289 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-28 01:25:37.256300 | orchestrator |
2026-03-28 01:25:37.256310 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-28 01:25:37.256321 | orchestrator | Saturday 28 March 2026 01:25:34 +0000 (0:00:00.279) 0:00:13.073 ********
2026-03-28 01:25:37.256332 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-28 01:25:37.256342 | orchestrator |
2026-03-28 01:25:37.256353 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 01:25:37.256364 | orchestrator | Saturday 28 March 2026 01:25:34 +0000 (0:00:00.277) 0:00:13.351 ********
2026-03-28 01:25:37.256374 | orchestrator |
2026-03-28 01:25:37.256385 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 01:25:37.256396 | orchestrator | Saturday 28 March 2026 01:25:35 +0000 (0:00:00.074) 0:00:13.426 ********
2026-03-28 01:25:37.256406 | orchestrator |
2026-03-28 01:25:37.256417 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 01:25:37.256428 | orchestrator | Saturday 28 March 2026 01:25:35 +0000 (0:00:00.072) 0:00:13.498 ********
2026-03-28 01:25:37.256438 | orchestrator |
2026-03-28 01:25:37.256449 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-03-28 01:25:37.256460 | orchestrator | Saturday 28 March 2026 01:25:35 +0000 (0:00:00.279) 0:00:13.778 ********
2026-03-28 01:25:37.256470 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-28 01:25:37.256481 | orchestrator |
2026-03-28 01:25:37.256492 | orchestrator | TASK [Print report file information] *******************************************
2026-03-28 01:25:37.256503 | orchestrator | Saturday 28 March 2026 01:25:36 +0000 (0:00:01.450) 0:00:15.228 ********
2026-03-28 01:25:37.256536 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-03-28 01:25:37.256547 | orchestrator |     "msg": [
2026-03-28 01:25:37.256559 | orchestrator |         "Validator run completed.",
2026-03-28 01:25:37.256993 | orchestrator |         "You can find the report file here:",
2026-03-28 01:25:37.257016 | orchestrator |         "/opt/reports/validator/ceph-mgrs-validator-2026-03-28T01:25:22+00:00-report.json",
2026-03-28 01:25:37.257027 | orchestrator |         "on the following host:",
2026-03-28 01:25:37.257039 | orchestrator |         "testbed-manager"
2026-03-28 01:25:37.257049 | orchestrator |     ]
2026-03-28 01:25:37.257061 | orchestrator | }
2026-03-28 01:25:37.257072 | orchestrator |
2026-03-28 01:25:37.257083 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:25:37.257095 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-28 01:25:37.257108 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 01:25:37.257135 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 01:25:37.630543 | orchestrator |
2026-03-28 01:25:37.630636 | orchestrator |
2026-03-28 01:25:37.630650 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:25:37.630710 | orchestrator | Saturday 28 March 2026 01:25:37 +0000 (0:00:00.425) 0:00:15.654 ********
2026-03-28 01:25:37.630722 | orchestrator | ===============================================================================
2026-03-28 01:25:37.630733 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.08s
2026-03-28 01:25:37.630744 | orchestrator | Write report file ------------------------------------------------------- 1.45s
2026-03-28 01:25:37.630755 | orchestrator | Aggregate test results step one ----------------------------------------- 1.34s
2026-03-28 01:25:37.630766 | orchestrator | Get container info ------------------------------------------------------ 1.07s
2026-03-28 01:25:37.630777 | orchestrator | Create report output directory ------------------------------------------ 1.07s
2026-03-28 01:25:37.630788 | orchestrator | Get timestamp for report file ------------------------------------------- 0.90s
2026-03-28 01:25:37.630798 | orchestrator | Set test result to passed if container is existing ---------------------- 0.54s
2026-03-28 01:25:37.630809 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.51s
2026-03-28 01:25:37.630820 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.47s
2026-03-28 01:25:37.630831 | orchestrator | Flush handlers ---------------------------------------------------------- 0.43s
2026-03-28 01:25:37.630841 | orchestrator | Print report file information ------------------------------------------- 0.43s
2026-03-28 01:25:37.630852 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.35s
2026-03-28 01:25:37.630863 | orchestrator | Set test result to failed if container is missing ----------------------- 0.33s
2026-03-28 01:25:37.630874 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s
2026-03-28 01:25:37.630884 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s
2026-03-28 01:25:37.630895 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.31s
2026-03-28 01:25:37.630906 | orchestrator | Fail due to missing containers ------------------------------------------ 0.30s
2026-03-28 01:25:37.630916 | orchestrator | Aggregate test results step two ----------------------------------------- 0.29s
2026-03-28 01:25:37.630927 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.29s
2026-03-28 01:25:37.630938 | orchestrator | Aggregate test results step two ----------------------------------------- 0.28s
2026-03-28 01:25:37.993945 | orchestrator | + osism validate ceph-osds
2026-03-28 01:25:59.910362 | orchestrator |
2026-03-28 01:25:59.910497 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-03-28 01:25:59.910514 | orchestrator |
2026-03-28 01:25:59.910526 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-03-28 01:25:59.910538 | orchestrator | Saturday 28 March 2026 01:25:54 +0000 (0:00:00.443) 0:00:00.443 ********
2026-03-28 01:25:59.910549 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-28 01:25:59.910560 | orchestrator |
2026-03-28 01:25:59.910571 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-28
01:25:59.910582 | orchestrator | Saturday 28 March 2026 01:25:55 +0000 (0:00:00.938) 0:00:01.382 ********
2026-03-28 01:25:59.910593 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-28 01:25:59.910616 | orchestrator |
2026-03-28 01:25:59.910626 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-28 01:25:59.910637 | orchestrator | Saturday 28 March 2026 01:25:56 +0000 (0:00:00.559) 0:00:01.942 ********
2026-03-28 01:25:59.910688 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-28 01:25:59.910699 | orchestrator |
2026-03-28 01:25:59.910710 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-28 01:25:59.910720 | orchestrator | Saturday 28 March 2026 01:25:57 +0000 (0:00:00.828) 0:00:02.770 ********
2026-03-28 01:25:59.910731 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:25:59.910743 | orchestrator |
2026-03-28 01:25:59.910768 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-03-28 01:25:59.910779 | orchestrator | Saturday 28 March 2026 01:25:57 +0000 (0:00:00.160) 0:00:02.931 ********
2026-03-28 01:25:59.910790 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:25:59.910800 | orchestrator |
2026-03-28 01:25:59.910811 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-03-28 01:25:59.910822 | orchestrator | Saturday 28 March 2026 01:25:57 +0000 (0:00:00.146) 0:00:03.077 ********
2026-03-28 01:25:59.910832 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:25:59.910843 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:25:59.910900 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:25:59.910912 | orchestrator |
2026-03-28 01:25:59.910925 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-03-28 01:25:59.910937 | orchestrator | Saturday 28 March 2026 01:25:57 +0000 (0:00:00.325) 0:00:03.402 ********
2026-03-28 01:25:59.910949 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:25:59.910961 | orchestrator |
2026-03-28 01:25:59.910974 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-03-28 01:25:59.910986 | orchestrator | Saturday 28 March 2026 01:25:58 +0000 (0:00:00.171) 0:00:03.574 ********
2026-03-28 01:25:59.910998 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:25:59.911009 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:25:59.911022 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:25:59.911034 | orchestrator |
2026-03-28 01:25:59.911047 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-03-28 01:25:59.911059 | orchestrator | Saturday 28 March 2026 01:25:58 +0000 (0:00:00.336) 0:00:03.911 ********
2026-03-28 01:25:59.911070 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:25:59.911082 | orchestrator |
2026-03-28 01:25:59.911094 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-28 01:25:59.911107 | orchestrator | Saturday 28 March 2026 01:25:59 +0000 (0:00:00.898) 0:00:04.810 ********
2026-03-28 01:25:59.911118 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:25:59.911130 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:25:59.911142 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:25:59.911154 | orchestrator |
2026-03-28 01:25:59.911166 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-03-28 01:25:59.911178 | orchestrator | Saturday 28 March 2026 01:25:59 +0000 (0:00:00.312) 0:00:05.122 ********
2026-03-28 01:25:59.911194 | orchestrator | skipping: [testbed-node-3] => (item={'id': '985ea2fc7b9d8d94fc6a7e0969eaec346e13827705c381ac01e579e6ff0d375c', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name':
'/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2026-03-28 01:25:59.911218 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e9620b52be238e4b179d2a54e13c2c4e055796f4683152fd4eafe05f0e41d768', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-03-28 01:25:59.911232 | orchestrator | skipping: [testbed-node-3] => (item={'id': '140be4d44eba37ecd65077749216f0cede30716538f8f09ea5c3034a3fa6537a', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-03-28 01:25:59.911247 | orchestrator | skipping: [testbed-node-3] => (item={'id': '917781e35cd44b130244987c1c9b4d500fa044ebed2c7cb31ef7f01edffe0fe2', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2026-03-28 01:25:59.911266 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f215402a8cdc8415e88d03a63773d246a2d67eae53b170eca2593df2394336fd', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})  2026-03-28 01:25:59.911309 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a4cb6e03c02a2adff35cee2ec80e9de9b71d43573576316ea10ac887f652aa62', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-03-28 01:25:59.911331 | orchestrator | skipping: [testbed-node-3] => (item={'id': '24a08624896c7ba243b892026028e8c3b865b81799e60dd338a866dc0f2f0e62', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 14 minutes 
(healthy)'})  2026-03-28 01:25:59.911348 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd361113b14771ae6e313ec12bce2cb54fa8a539424a094c86f2e10beaa0ace60', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2026-03-28 01:25:59.911367 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0e8a46ed43c7e99b1c6906eb227b5249316b1e2888db3162ef3fd39462eb5e07', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 25 minutes'})  2026-03-28 01:25:59.911379 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2f9d7cda7039659ccc4215d2c7232ba7c1c522df733474459e6c40cf09054d6d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 26 minutes'})  2026-03-28 01:25:59.911392 | orchestrator | ok: [testbed-node-3] => (item={'id': '00934709db3ab8d1f7c66283ee8f92cd848686abe323bfaa5e7f72c806cc3b7f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-28 01:25:59.911404 | orchestrator | ok: [testbed-node-3] => (item={'id': '56d713680cd0c6ee1dea9a219d388f4612867bd39986bc57a807700aac58137d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-28 01:25:59.911415 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c14b94d1f363e61de60d3a11a110786a08f3f0f671bdc2e909415ed428d8dec4', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 31 minutes'})  2026-03-28 01:25:59.911426 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd5dee588447be0a86fbdff43bcdf609f95da94ac27d6dc88905e72b7ecbd6aff', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})  2026-03-28 01:25:59.911444 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9fe62e3bdc304f7c06dad35e2ff8ef78fa162c6d69f5961885065219405eabfa', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 33 minutes (healthy)'})  2026-03-28 01:25:59.911455 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6aacd2be1fd4a444052865e87e2b984fcd6a4e9448d4992b538c44c294f47c17', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 33 minutes'})  2026-03-28 01:25:59.911467 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2b5efa709e71e46c24a8aee72929fb594a7ddeb4ac23394b7c1b02227b95c740', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 34 minutes'})  2026-03-28 01:25:59.911478 | orchestrator | skipping: [testbed-node-3] => (item={'id': '73360b32da8bfc1b834677c32cdd69ee2be058ab0461a7beadd5223de9c62429', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 34 minutes'})  2026-03-28 01:25:59.911489 | orchestrator | skipping: [testbed-node-4] => (item={'id': '84435904720ef01b3a87444ec13054d49daf3c95d20db337dfeb2f1f519c80ae', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2026-03-28 01:25:59.911500 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0f56b2b0474e93494d0c8320f6fd09d832b047420314de4e42344bad3366b307', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  
2026-03-28 01:25:59.911517 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'db448791243a3c3c2c799d2b3ad9549dea7d16ba63f2ebef53a110fd94c77984', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-03-28 01:26:00.175256 | orchestrator | skipping: [testbed-node-4] => (item={'id': '06c0400976e88eb051265efbc4fe970f1c813b1f0e2b6b48fa6d709ee5a0b89a', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2026-03-28 01:26:00.175360 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b9b712acdfe9f05afe6ef30b3082850af65f4a8de4b814f17af0302b9dacb9da', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})  2026-03-28 01:26:00.175384 | orchestrator | skipping: [testbed-node-4] => (item={'id': '49c6ad6d86029d0ed61fadb03fa2dc5c7f7de291559775eea5e07ec1249ca1f8', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-03-28 01:26:00.175404 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9507ae9d0df60954ef7310272e36fb367db985df686e8d8485664a97ec6238be', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2026-03-28 01:26:00.175424 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e7f95c8250351dc903c3f469f8a11baec29fcff401cd2e70ddf13188862c46fc', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2026-03-28 01:26:00.175444 | orchestrator | skipping: 
[testbed-node-4] => (item={'id': 'f97bba6ee1cf725cdd2996cc4f3d4b7c686b1fa23db510d54b0f9193b1d4e349', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 25 minutes'})  2026-03-28 01:26:00.175488 | orchestrator | skipping: [testbed-node-4] => (item={'id': '77f67cba89695700ddf42f5f436e7b04124a852a628be1028595bff5b02937c9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 26 minutes'})  2026-03-28 01:26:00.175540 | orchestrator | ok: [testbed-node-4] => (item={'id': '3cdad71c64bc0c51faef365c7b8fa8f082675763b0212dd3faec398a877da34a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-28 01:26:00.175562 | orchestrator | ok: [testbed-node-4] => (item={'id': 'ab93394f5b9e041edf13299920a505ee28c028a1bb17e7ad24c420752500415a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-28 01:26:00.175581 | orchestrator | skipping: [testbed-node-4] => (item={'id': '52ef091de7cb173fc664e31a677f6fef6d117ac5eaef1572582072715c9f7ca8', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 31 minutes'})  2026-03-28 01:26:00.175601 | orchestrator | skipping: [testbed-node-4] => (item={'id': '50749540a885b839839954aceda2319b0381edb4ce40473af85307349b8f2e63', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})  2026-03-28 01:26:00.175620 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6028297bd56584d0b617e9b56dc7ee529cce28c5ecb5960c87e842392b8189d5', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 
'state': 'running', 'status': 'Up 33 minutes (healthy)'})  2026-03-28 01:26:00.175638 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9caa84cfd338646afdd065c59e53eea162d57a09247c1a2d5a9ffb0cfce1982a', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 33 minutes'})  2026-03-28 01:26:00.175724 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2fbe97b2a85f83a98fdcee57c11525a23e11250ea0556c6a0aa6b9d5bcb14f70', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 34 minutes'})  2026-03-28 01:26:00.175771 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'df310ef13ab72f54c366664d97d732b239351c233e19e32eabfd8e41d8997430', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 34 minutes'})  2026-03-28 01:26:00.175792 | orchestrator | skipping: [testbed-node-5] => (item={'id': '23ac61e0691f34d10785cc2d2b7647469600c698d1ab614642999961681eae15', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2026-03-28 01:26:00.175807 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd2e84696d3515287cc99c2dc656e86faaca46af91e13717d2dde73f0cde2fda4', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-03-28 01:26:00.175828 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4b599cee77efb7669e4e4a0f12737fdf7da632b8d90a3c3b5ea049df35891a8a', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-03-28 01:26:00.175841 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'2d8e85a5c3395e51e3b7176e07f247a09bdfcf7626f2102bfb17b053d0502e6f', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2026-03-28 01:26:00.175855 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0f50fe60f3a8ae43723d7848969883413d730a4c7fad5588dfe1b23c28357f58', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})  2026-03-28 01:26:00.175878 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8a0c2748556c4700082bfab3c21e7fd4977c895bc534e0c60099300685780c79', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-03-28 01:26:00.175891 | orchestrator | skipping: [testbed-node-5] => (item={'id': '04e2de6a3e5718e95b9dcd863c1a6126be8b3472427dee5293c4425c72e27da7', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2026-03-28 01:26:00.175904 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7c9c7a9d34b90a11f4e45ac036a8330a70d1878e4695bb4da7b48228fccbea75', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2026-03-28 01:26:00.175917 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6cd8e24e2b119e0cb98ce3c53a0a36c3e90f8447600615460289b307c57ef551', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 25 minutes'})  2026-03-28 01:26:00.175930 | orchestrator | skipping: [testbed-node-5] => (item={'id': '86221d0f0779f124173f77f1f5f6cbac74f447c7df099b8417e5cf8059c44987', 'image': 
'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 26 minutes'})  2026-03-28 01:26:00.175943 | orchestrator | ok: [testbed-node-5] => (item={'id': '8d975d87c95442fb57780cfb6c6721d864992f348ef91242c90ceb10eb650877', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-28 01:26:00.175957 | orchestrator | ok: [testbed-node-5] => (item={'id': '73fc6c9c81055532eac05d177500967d37f7c38bdf3d7fba61beaba6788327d1', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-28 01:26:00.175970 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fec3ec5b0f343d2d59e8402f7a5a0177223fb3358ab54da5212a38f75ad995d2', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 31 minutes'})  2026-03-28 01:26:00.175984 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7172a56170f4b4105cd7b681019720aa02082c4c84f008bd31bbfb3ee8f718e4', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})  2026-03-28 01:26:00.176005 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4dd25c15388d7d6e150da6e4d0613582d43683ad8973bce70139d618bd6b1f36', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 33 minutes (healthy)'})  2026-03-28 01:26:13.467585 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3771fe689f684c6c255a6e8028e18696a17d611491770edb36472da5bb1e212d', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 33 minutes'})  2026-03-28 01:26:13.467779 | orchestrator | skipping: 
[testbed-node-5] => (item={'id': 'd93f454e4b392fdbabd55342b62a4d51fa47e871216572084333d0e468a63966', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 34 minutes'})  2026-03-28 01:26:13.467810 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cb7faadf6d842f8d14b5ca594d006e1be46098b7b52a302e2e79082f4743a3bf', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 34 minutes'})  2026-03-28 01:26:13.467823 | orchestrator | 2026-03-28 01:26:13.467834 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-03-28 01:26:13.467863 | orchestrator | Saturday 28 March 2026 01:26:00 +0000 (0:00:00.491) 0:00:05.614 ******** 2026-03-28 01:26:13.467874 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:26:13.467884 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:26:13.467894 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:26:13.467903 | orchestrator | 2026-03-28 01:26:13.467920 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-03-28 01:26:13.467942 | orchestrator | Saturday 28 March 2026 01:26:00 +0000 (0:00:00.341) 0:00:05.955 ******** 2026-03-28 01:26:13.467963 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:26:13.467980 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:26:13.467995 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:26:13.468010 | orchestrator | 2026-03-28 01:26:13.468027 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-03-28 01:26:13.468042 | orchestrator | Saturday 28 March 2026 01:26:00 +0000 (0:00:00.507) 0:00:06.462 ******** 2026-03-28 01:26:13.468057 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:26:13.468072 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:26:13.468087 | orchestrator | ok: 
[testbed-node-5]
2026-03-28 01:26:13.468104 | orchestrator |
2026-03-28 01:26:13.468121 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-28 01:26:13.468139 | orchestrator | Saturday 28 March 2026 01:26:01 +0000 (0:00:00.313) 0:00:06.776 ********
2026-03-28 01:26:13.468157 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:26:13.468174 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:26:13.468189 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:26:13.468200 | orchestrator |
2026-03-28 01:26:13.468211 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-03-28 01:26:13.468222 | orchestrator | Saturday 28 March 2026 01:26:01 +0000 (0:00:00.308) 0:00:07.085 ********
2026-03-28 01:26:13.468234 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-03-28 01:26:13.468245 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-03-28 01:26:13.468256 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:26:13.468267 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-03-28 01:26:13.468278 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-03-28 01:26:13.468289 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:26:13.468300 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-03-28 01:26:13.468311 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-03-28 01:26:13.468322 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:26:13.468333 | orchestrator |
2026-03-28 01:26:13.468344 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-03-28 01:26:13.468355 | orchestrator | Saturday 28 March 2026 01:26:01 +0000 (0:00:00.328) 0:00:07.413 ********
2026-03-28 01:26:13.468366 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:26:13.468377 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:26:13.468388 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:26:13.468398 | orchestrator |
2026-03-28 01:26:13.468409 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-03-28 01:26:13.468420 | orchestrator | Saturday 28 March 2026 01:26:02 +0000 (0:00:00.535) 0:00:07.948 ********
2026-03-28 01:26:13.468431 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:26:13.468442 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:26:13.468453 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:26:13.468464 | orchestrator |
2026-03-28 01:26:13.468475 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-03-28 01:26:13.468486 | orchestrator | Saturday 28 March 2026 01:26:02 +0000 (0:00:00.349) 0:00:08.298 ********
2026-03-28 01:26:13.468497 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:26:13.468517 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:26:13.468526 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:26:13.468536 | orchestrator |
2026-03-28 01:26:13.468545 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-03-28 01:26:13.468554 | orchestrator | Saturday 28 March 2026 01:26:03 +0000 (0:00:00.304) 0:00:08.603 ********
2026-03-28 01:26:13.468564 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:26:13.468578 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:26:13.468594 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:26:13.468608 | orchestrator |
2026-03-28 01:26:13.468624 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-28 01:26:13.468667 | orchestrator | Saturday 28 March 2026 01:26:03 +0000 (0:00:00.344) 0:00:08.948 ********
2026-03-28 01:26:13.468682 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:26:13.468696 | orchestrator |
2026-03-28 01:26:13.468735 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-28 01:26:13.468752 | orchestrator | Saturday 28 March 2026 01:26:04 +0000 (0:00:00.823) 0:00:09.772 ********
2026-03-28 01:26:13.468769 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:26:13.468785 | orchestrator |
2026-03-28 01:26:13.468802 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-28 01:26:13.468819 | orchestrator | Saturday 28 March 2026 01:26:04 +0000 (0:00:00.252) 0:00:10.024 ********
2026-03-28 01:26:13.468836 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:26:13.468850 | orchestrator |
2026-03-28 01:26:13.468860 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 01:26:13.468869 | orchestrator | Saturday 28 March 2026 01:26:04 +0000 (0:00:00.283) 0:00:10.308 ********
2026-03-28 01:26:13.468879 | orchestrator |
2026-03-28 01:26:13.468888 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 01:26:13.468897 | orchestrator | Saturday 28 March 2026 01:26:04 +0000 (0:00:00.069) 0:00:10.377 ********
2026-03-28 01:26:13.468907 | orchestrator |
2026-03-28 01:26:13.468916 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 01:26:13.468926 | orchestrator | Saturday 28 March 2026 01:26:04 +0000 (0:00:00.067) 0:00:10.445 ********
2026-03-28 01:26:13.468935 | orchestrator |
2026-03-28 01:26:13.468945 | orchestrator | TASK [Print report file information] *******************************************
2026-03-28 01:26:13.468955 | orchestrator | Saturday 28 March 2026 01:26:05 +0000 (0:00:00.073) 0:00:10.519 ********
2026-03-28 01:26:13.468964 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:26:13.468973 | orchestrator |
2026-03-28 01:26:13.468983 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-03-28 01:26:13.468993 | orchestrator | Saturday 28 March 2026 01:26:05 +0000 (0:00:00.341) 0:00:10.861 ********
2026-03-28 01:26:13.469002 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:26:13.469012 | orchestrator |
2026-03-28 01:26:13.469021 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-28 01:26:13.469031 | orchestrator | Saturday 28 March 2026 01:26:05 +0000 (0:00:00.257) 0:00:11.118 ********
2026-03-28 01:26:13.469041 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:26:13.469050 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:26:13.469060 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:26:13.469069 | orchestrator |
2026-03-28 01:26:13.469078 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-03-28 01:26:13.469088 | orchestrator | Saturday 28 March 2026 01:26:05 +0000 (0:00:00.288) 0:00:11.407 ********
2026-03-28 01:26:13.469097 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:26:13.469107 | orchestrator |
2026-03-28 01:26:13.469116 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-03-28 01:26:13.469126 | orchestrator | Saturday 28 March 2026 01:26:06 +0000 (0:00:00.798) 0:00:12.205 ********
2026-03-28 01:26:13.469135 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-28 01:26:13.469145 | orchestrator |
2026-03-28 01:26:13.469154 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-03-28 01:26:13.469174 | orchestrator | Saturday 28 March 2026 01:26:08 +0000 (0:00:01.706) 0:00:13.912 ********
2026-03-28 01:26:13.469183 |
orchestrator | ok: [testbed-node-3] 2026-03-28 01:26:13.469193 | orchestrator | 2026-03-28 01:26:13.469203 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-03-28 01:26:13.469212 | orchestrator | Saturday 28 March 2026 01:26:08 +0000 (0:00:00.141) 0:00:14.053 ******** 2026-03-28 01:26:13.469222 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:26:13.469231 | orchestrator | 2026-03-28 01:26:13.469241 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-03-28 01:26:13.469250 | orchestrator | Saturday 28 March 2026 01:26:08 +0000 (0:00:00.331) 0:00:14.385 ******** 2026-03-28 01:26:13.469259 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:26:13.469269 | orchestrator | 2026-03-28 01:26:13.469278 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-03-28 01:26:13.469288 | orchestrator | Saturday 28 March 2026 01:26:09 +0000 (0:00:00.182) 0:00:14.568 ******** 2026-03-28 01:26:13.469297 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:26:13.469307 | orchestrator | 2026-03-28 01:26:13.469316 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-28 01:26:13.469326 | orchestrator | Saturday 28 March 2026 01:26:09 +0000 (0:00:00.121) 0:00:14.690 ******** 2026-03-28 01:26:13.469335 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:26:13.469345 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:26:13.469354 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:26:13.469363 | orchestrator | 2026-03-28 01:26:13.469373 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-03-28 01:26:13.469382 | orchestrator | Saturday 28 March 2026 01:26:09 +0000 (0:00:00.305) 0:00:14.995 ******** 2026-03-28 01:26:13.469392 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:26:13.469401 | orchestrator | changed: 
[testbed-node-5] 2026-03-28 01:26:13.469411 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:26:13.469420 | orchestrator | 2026-03-28 01:26:13.469430 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-03-28 01:26:13.469439 | orchestrator | Saturday 28 March 2026 01:26:12 +0000 (0:00:02.654) 0:00:17.649 ******** 2026-03-28 01:26:13.469449 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:26:13.469458 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:26:13.469468 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:26:13.469477 | orchestrator | 2026-03-28 01:26:13.469486 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-03-28 01:26:13.469496 | orchestrator | Saturday 28 March 2026 01:26:12 +0000 (0:00:00.358) 0:00:18.008 ******** 2026-03-28 01:26:13.469505 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:26:13.469515 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:26:13.469524 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:26:13.469533 | orchestrator | 2026-03-28 01:26:13.469543 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-03-28 01:26:13.469553 | orchestrator | Saturday 28 March 2026 01:26:13 +0000 (0:00:00.553) 0:00:18.561 ******** 2026-03-28 01:26:13.469562 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:26:13.469572 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:26:13.469581 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:26:13.469591 | orchestrator | 2026-03-28 01:26:13.469607 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-03-28 01:26:23.303241 | orchestrator | Saturday 28 March 2026 01:26:13 +0000 (0:00:00.362) 0:00:18.924 ******** 2026-03-28 01:26:23.303386 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:26:23.303404 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:26:23.303415 | 
orchestrator | ok: [testbed-node-5] 2026-03-28 01:26:23.303426 | orchestrator | 2026-03-28 01:26:23.303446 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-03-28 01:26:23.303466 | orchestrator | Saturday 28 March 2026 01:26:14 +0000 (0:00:00.585) 0:00:19.509 ******** 2026-03-28 01:26:23.303487 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:26:23.303537 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:26:23.303558 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:26:23.303578 | orchestrator | 2026-03-28 01:26:23.303598 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-03-28 01:26:23.303618 | orchestrator | Saturday 28 March 2026 01:26:14 +0000 (0:00:00.343) 0:00:19.852 ******** 2026-03-28 01:26:23.303852 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:26:23.303877 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:26:23.303889 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:26:23.303902 | orchestrator | 2026-03-28 01:26:23.303924 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-28 01:26:23.303938 | orchestrator | Saturday 28 March 2026 01:26:14 +0000 (0:00:00.310) 0:00:20.163 ******** 2026-03-28 01:26:23.303950 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:26:23.303963 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:26:23.303975 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:26:23.303987 | orchestrator | 2026-03-28 01:26:23.303999 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-03-28 01:26:23.304012 | orchestrator | Saturday 28 March 2026 01:26:15 +0000 (0:00:00.541) 0:00:20.704 ******** 2026-03-28 01:26:23.304024 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:26:23.304037 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:26:23.304049 | orchestrator | ok: [testbed-node-5] 
2026-03-28 01:26:23.304061 | orchestrator | 2026-03-28 01:26:23.304073 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-03-28 01:26:23.304086 | orchestrator | Saturday 28 March 2026 01:26:16 +0000 (0:00:00.879) 0:00:21.583 ******** 2026-03-28 01:26:23.304098 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:26:23.304110 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:26:23.304122 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:26:23.304134 | orchestrator | 2026-03-28 01:26:23.304148 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-03-28 01:26:23.304160 | orchestrator | Saturday 28 March 2026 01:26:16 +0000 (0:00:00.332) 0:00:21.916 ******** 2026-03-28 01:26:23.304170 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:26:23.304181 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:26:23.304192 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:26:23.304202 | orchestrator | 2026-03-28 01:26:23.304213 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-03-28 01:26:23.304224 | orchestrator | Saturday 28 March 2026 01:26:16 +0000 (0:00:00.330) 0:00:22.246 ******** 2026-03-28 01:26:23.304234 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:26:23.304245 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:26:23.304256 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:26:23.304266 | orchestrator | 2026-03-28 01:26:23.304277 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-28 01:26:23.304288 | orchestrator | Saturday 28 March 2026 01:26:17 +0000 (0:00:00.567) 0:00:22.814 ******** 2026-03-28 01:26:23.304299 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 01:26:23.304310 | orchestrator | 2026-03-28 01:26:23.304321 | orchestrator | TASK [Set validation result to failed if a test failed] 
************************ 2026-03-28 01:26:23.304331 | orchestrator | Saturday 28 March 2026 01:26:17 +0000 (0:00:00.284) 0:00:23.099 ******** 2026-03-28 01:26:23.304342 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:26:23.304353 | orchestrator | 2026-03-28 01:26:23.304363 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-28 01:26:23.304374 | orchestrator | Saturday 28 March 2026 01:26:17 +0000 (0:00:00.266) 0:00:23.365 ******** 2026-03-28 01:26:23.304385 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 01:26:23.304395 | orchestrator | 2026-03-28 01:26:23.304406 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-28 01:26:23.304417 | orchestrator | Saturday 28 March 2026 01:26:19 +0000 (0:00:01.775) 0:00:25.141 ******** 2026-03-28 01:26:23.304427 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 01:26:23.304450 | orchestrator | 2026-03-28 01:26:23.304461 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-28 01:26:23.304471 | orchestrator | Saturday 28 March 2026 01:26:19 +0000 (0:00:00.291) 0:00:25.432 ******** 2026-03-28 01:26:23.304482 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 01:26:23.304492 | orchestrator | 2026-03-28 01:26:23.304503 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:26:23.304513 | orchestrator | Saturday 28 March 2026 01:26:20 +0000 (0:00:00.309) 0:00:25.742 ******** 2026-03-28 01:26:23.304524 | orchestrator | 2026-03-28 01:26:23.304535 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:26:23.304545 | orchestrator | Saturday 28 March 2026 01:26:20 +0000 (0:00:00.078) 0:00:25.820 ******** 2026-03-28 01:26:23.304556 | orchestrator | 2026-03-28 
01:26:23.304566 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:26:23.304577 | orchestrator | Saturday 28 March 2026 01:26:20 +0000 (0:00:00.078) 0:00:25.899 ******** 2026-03-28 01:26:23.304588 | orchestrator | 2026-03-28 01:26:23.304598 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-28 01:26:23.304609 | orchestrator | Saturday 28 March 2026 01:26:20 +0000 (0:00:00.082) 0:00:25.982 ******** 2026-03-28 01:26:23.304620 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 01:26:23.304684 | orchestrator | 2026-03-28 01:26:23.304696 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-28 01:26:23.304708 | orchestrator | Saturday 28 March 2026 01:26:22 +0000 (0:00:01.757) 0:00:27.739 ******** 2026-03-28 01:26:23.304741 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-03-28 01:26:23.304753 | orchestrator |  "msg": [ 2026-03-28 01:26:23.304765 | orchestrator |  "Validator run completed.", 2026-03-28 01:26:23.304776 | orchestrator |  "You can find the report file here:", 2026-03-28 01:26:23.304787 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-03-28T01:25:55+00:00-report.json", 2026-03-28 01:26:23.304798 | orchestrator |  "on the following host:", 2026-03-28 01:26:23.304809 | orchestrator |  "testbed-manager" 2026-03-28 01:26:23.304820 | orchestrator |  ] 2026-03-28 01:26:23.304831 | orchestrator | } 2026-03-28 01:26:23.304842 | orchestrator | 2026-03-28 01:26:23.304853 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:26:23.304864 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-28 01:26:23.304882 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  
rescued=0 ignored=0 2026-03-28 01:26:23.304893 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-28 01:26:23.304904 | orchestrator | 2026-03-28 01:26:23.304915 | orchestrator | 2026-03-28 01:26:23.304925 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:26:23.304936 | orchestrator | Saturday 28 March 2026 01:26:22 +0000 (0:00:00.646) 0:00:28.386 ******** 2026-03-28 01:26:23.304947 | orchestrator | =============================================================================== 2026-03-28 01:26:23.304957 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.65s 2026-03-28 01:26:23.304968 | orchestrator | Aggregate test results step one ----------------------------------------- 1.78s 2026-03-28 01:26:23.304979 | orchestrator | Write report file ------------------------------------------------------- 1.76s 2026-03-28 01:26:23.304989 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.71s 2026-03-28 01:26:23.305000 | orchestrator | Get timestamp for report file ------------------------------------------- 0.94s 2026-03-28 01:26:23.305010 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.90s 2026-03-28 01:26:23.305028 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.88s 2026-03-28 01:26:23.305039 | orchestrator | Create report output directory ------------------------------------------ 0.83s 2026-03-28 01:26:23.305050 | orchestrator | Aggregate test results step one ----------------------------------------- 0.82s 2026-03-28 01:26:23.305060 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.80s 2026-03-28 01:26:23.305071 | orchestrator | Print report file information ------------------------------------------- 0.65s 2026-03-28 01:26:23.305082 | 
orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.59s 2026-03-28 01:26:23.305092 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.57s 2026-03-28 01:26:23.305103 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.56s 2026-03-28 01:26:23.305113 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.55s 2026-03-28 01:26:23.305124 | orchestrator | Prepare test data ------------------------------------------------------- 0.54s 2026-03-28 01:26:23.305135 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.54s 2026-03-28 01:26:23.305145 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.51s 2026-03-28 01:26:23.305156 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.49s 2026-03-28 01:26:23.305166 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.36s 2026-03-28 01:26:23.670287 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-03-28 01:26:23.679965 | orchestrator | + set -e 2026-03-28 01:26:23.680090 | orchestrator | + source /opt/manager-vars.sh 2026-03-28 01:26:23.680108 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-28 01:26:23.680120 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-28 01:26:23.680131 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-28 01:26:23.680141 | orchestrator | ++ CEPH_VERSION=reef 2026-03-28 01:26:23.680152 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-28 01:26:23.680163 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-28 01:26:23.680173 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-28 01:26:23.680185 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-28 01:26:23.680197 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-28 
01:26:23.680208 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-28 01:26:23.680304 | orchestrator | ++ export ARA=false 2026-03-28 01:26:23.680315 | orchestrator | ++ ARA=false 2026-03-28 01:26:23.680325 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-28 01:26:23.680338 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-28 01:26:23.680352 | orchestrator | ++ export TEMPEST=true 2026-03-28 01:26:23.680362 | orchestrator | ++ TEMPEST=true 2026-03-28 01:26:23.680373 | orchestrator | ++ export IS_ZUUL=true 2026-03-28 01:26:23.680383 | orchestrator | ++ IS_ZUUL=true 2026-03-28 01:26:23.680393 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.253 2026-03-28 01:26:23.680403 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.253 2026-03-28 01:26:23.680414 | orchestrator | ++ export EXTERNAL_API=false 2026-03-28 01:26:23.680424 | orchestrator | ++ EXTERNAL_API=false 2026-03-28 01:26:23.680435 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-28 01:26:23.680445 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-28 01:26:23.680456 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-28 01:26:23.680466 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-28 01:26:23.680476 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-28 01:26:23.680485 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-28 01:26:23.680491 | orchestrator | + source /etc/os-release 2026-03-28 01:26:23.680498 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-03-28 01:26:23.680504 | orchestrator | ++ NAME=Ubuntu 2026-03-28 01:26:23.680510 | orchestrator | ++ VERSION_ID=24.04 2026-03-28 01:26:23.680517 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-03-28 01:26:23.680523 | orchestrator | ++ VERSION_CODENAME=noble 2026-03-28 01:26:23.680529 | orchestrator | ++ ID=ubuntu 2026-03-28 01:26:23.680535 | orchestrator | ++ ID_LIKE=debian 2026-03-28 01:26:23.680541 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-03-28 01:26:23.680547 | orchestrator 
| ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-03-28 01:26:23.680554 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-03-28 01:26:23.680560 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-03-28 01:26:23.680589 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-03-28 01:26:23.680607 | orchestrator | ++ LOGO=ubuntu-logo 2026-03-28 01:26:23.680617 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-03-28 01:26:23.680656 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-03-28 01:26:23.680670 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-28 01:26:23.703268 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-28 01:26:49.103129 | orchestrator | 2026-03-28 01:26:49.103248 | orchestrator | # Status of Elasticsearch 2026-03-28 01:26:49.103268 | orchestrator | 2026-03-28 01:26:49.103281 | orchestrator | + pushd /opt/configuration/contrib 2026-03-28 01:26:49.103294 | orchestrator | + echo 2026-03-28 01:26:49.103305 | orchestrator | + echo '# Status of Elasticsearch' 2026-03-28 01:26:49.103316 | orchestrator | + echo 2026-03-28 01:26:49.103327 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-03-28 01:26:49.267195 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-03-28 01:26:49.267281 | orchestrator | 2026-03-28 01:26:49.267295 | orchestrator | # Status of MariaDB 2026-03-28 01:26:49.267309 | orchestrator | 2026-03-28 01:26:49.267319 | orchestrator | + echo 2026-03-28 01:26:49.267330 | orchestrator | + echo '# Status of MariaDB' 2026-03-28 01:26:49.267340 | orchestrator | + echo 2026-03-28 01:26:49.267809 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-28 01:26:49.318975 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-28 01:26:49.319068 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-28 01:26:49.319083 | orchestrator | + MARIADB_USER=root_shard_0 2026-03-28 01:26:49.319096 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-03-28 01:26:49.379325 | orchestrator | Reading package lists... 2026-03-28 01:26:49.741572 | orchestrator | Building dependency tree... 2026-03-28 01:26:49.742244 | orchestrator | Reading state information... 2026-03-28 01:26:50.178500 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-03-28 01:26:50.178599 | orchestrator | bc set to manually installed. 2026-03-28 01:26:50.178671 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
2026-03-28 01:26:50.846939 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-03-28 01:26:50.847831 | orchestrator | 2026-03-28 01:26:50.847864 | orchestrator | # Status of Prometheus 2026-03-28 01:26:50.847877 | orchestrator | + echo 2026-03-28 01:26:50.847889 | orchestrator | + echo '# Status of Prometheus' 2026-03-28 01:26:50.847899 | orchestrator | + echo 2026-03-28 01:26:50.847909 | orchestrator | 2026-03-28 01:26:50.847920 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-03-28 01:26:50.919066 | orchestrator | Unauthorized 2026-03-28 01:26:50.922790 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-03-28 01:26:50.987023 | orchestrator | Unauthorized 2026-03-28 01:26:50.990342 | orchestrator | 2026-03-28 01:26:50.990391 | orchestrator | # Status of RabbitMQ 2026-03-28 01:26:50.990398 | orchestrator | 2026-03-28 01:26:50.990403 | orchestrator | + echo 2026-03-28 01:26:50.990407 | orchestrator | + echo '# Status of RabbitMQ' 2026-03-28 01:26:50.990412 | orchestrator | + echo 2026-03-28 01:26:50.991044 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-28 01:26:51.045949 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-28 01:26:51.046100 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-28 01:26:51.046120 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-03-28 01:26:51.530590 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2026-03-28 01:26:51.539921 | orchestrator | 2026-03-28 01:26:51.540018 | orchestrator | # Status of Redis 2026-03-28 01:26:51.540034 | orchestrator | 2026-03-28 01:26:51.540046 | orchestrator | + echo 2026-03-28 01:26:51.540058 | orchestrator | + echo '# Status of Redis' 2026-03-28 01:26:51.540070 | orchestrator | + echo 2026-03-28 01:26:51.540082 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A 
-E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-03-28 01:26:51.544916 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002166s;;;0.000000;10.000000 2026-03-28 01:26:51.544970 | orchestrator | + popd 2026-03-28 01:26:51.544991 | orchestrator | 2026-03-28 01:26:51.545020 | orchestrator | # Create backup of MariaDB database 2026-03-28 01:26:51.545040 | orchestrator | 2026-03-28 01:26:51.545058 | orchestrator | + echo 2026-03-28 01:26:51.545078 | orchestrator | + echo '# Create backup of MariaDB database' 2026-03-28 01:26:51.545089 | orchestrator | + echo 2026-03-28 01:26:51.545101 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-03-28 01:26:53.619021 | orchestrator | 2026-03-28 01:26:53 | INFO  | Task d6661296-45da-4f91-8201-43a510c6c1d9 (mariadb_backup) was prepared for execution. 2026-03-28 01:26:53.619133 | orchestrator | 2026-03-28 01:26:53 | INFO  | It takes a moment until task d6661296-45da-4f91-8201-43a510c6c1d9 (mariadb_backup) has been started and output is visible here. 
2026-03-28 01:27:21.822536 | orchestrator | 2026-03-28 01:27:21.822653 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:27:21.822661 | orchestrator | 2026-03-28 01:27:21.822666 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:27:21.822671 | orchestrator | Saturday 28 March 2026 01:26:58 +0000 (0:00:00.198) 0:00:00.198 ******** 2026-03-28 01:27:21.822676 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:27:21.822681 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:27:21.822684 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:27:21.822688 | orchestrator | 2026-03-28 01:27:21.822692 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:27:21.822696 | orchestrator | Saturday 28 March 2026 01:26:58 +0000 (0:00:00.327) 0:00:00.526 ******** 2026-03-28 01:27:21.822700 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-28 01:27:21.822705 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-28 01:27:21.822709 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-28 01:27:21.822713 | orchestrator | 2026-03-28 01:27:21.822716 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-28 01:27:21.822720 | orchestrator | 2026-03-28 01:27:21.822724 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-28 01:27:21.822728 | orchestrator | Saturday 28 March 2026 01:26:59 +0000 (0:00:00.636) 0:00:01.162 ******** 2026-03-28 01:27:21.822732 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 01:27:21.822736 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-28 01:27:21.822739 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-28 01:27:21.822743 | orchestrator | 
2026-03-28 01:27:21.822747 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-28 01:27:21.822751 | orchestrator | Saturday 28 March 2026 01:26:59 +0000 (0:00:00.530) 0:00:01.692 ******** 2026-03-28 01:27:21.822755 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:27:21.822759 | orchestrator | 2026-03-28 01:27:21.822763 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-03-28 01:27:21.822767 | orchestrator | Saturday 28 March 2026 01:27:00 +0000 (0:00:00.553) 0:00:02.246 ******** 2026-03-28 01:27:21.822771 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:27:21.822774 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:27:21.822778 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:27:21.822782 | orchestrator | 2026-03-28 01:27:21.822786 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-03-28 01:27:21.822789 | orchestrator | Saturday 28 March 2026 01:27:03 +0000 (0:00:03.318) 0:00:05.564 ******** 2026-03-28 01:27:21.822793 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-28 01:27:21.822797 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-03-28 01:27:21.822802 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-28 01:27:21.822819 | orchestrator | mariadb_bootstrap_restart 2026-03-28 01:27:21.822823 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:27:21.822827 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:27:21.822831 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:27:21.822835 | orchestrator | 2026-03-28 01:27:21.822838 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-28 01:27:21.822842 | orchestrator | 
skipping: no hosts matched 2026-03-28 01:27:21.822846 | orchestrator | 2026-03-28 01:27:21.822849 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-28 01:27:21.822853 | orchestrator | skipping: no hosts matched 2026-03-28 01:27:21.822857 | orchestrator | 2026-03-28 01:27:21.822860 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-28 01:27:21.822864 | orchestrator | skipping: no hosts matched 2026-03-28 01:27:21.822868 | orchestrator | 2026-03-28 01:27:21.822871 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-28 01:27:21.822875 | orchestrator | 2026-03-28 01:27:21.822879 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-28 01:27:21.822882 | orchestrator | Saturday 28 March 2026 01:27:20 +0000 (0:00:17.267) 0:00:22.832 ******** 2026-03-28 01:27:21.822886 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:27:21.822890 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:27:21.822893 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:27:21.822897 | orchestrator | 2026-03-28 01:27:21.822901 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-28 01:27:21.822904 | orchestrator | Saturday 28 March 2026 01:27:21 +0000 (0:00:00.332) 0:00:23.164 ******** 2026-03-28 01:27:21.822908 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:27:21.822912 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:27:21.822915 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:27:21.822919 | orchestrator | 2026-03-28 01:27:21.822923 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:27:21.822927 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 
01:27:21.822932 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 01:27:21.822936 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 01:27:21.822940 | orchestrator | 2026-03-28 01:27:21.822943 | orchestrator | 2026-03-28 01:27:21.822947 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:27:21.822951 | orchestrator | Saturday 28 March 2026 01:27:21 +0000 (0:00:00.413) 0:00:23.578 ******** 2026-03-28 01:27:21.822954 | orchestrator | =============================================================================== 2026-03-28 01:27:21.822958 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 17.27s 2026-03-28 01:27:21.822971 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.32s 2026-03-28 01:27:21.822976 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s 2026-03-28 01:27:21.822979 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.55s 2026-03-28 01:27:21.822983 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.53s 2026-03-28 01:27:21.822987 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.41s 2026-03-28 01:27:21.822990 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.33s 2026-03-28 01:27:21.822994 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2026-03-28 01:27:22.216189 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-03-28 01:27:22.223440 | orchestrator | + set -e 2026-03-28 01:27:22.223529 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 01:27:22.224206 | orchestrator | ++ export 
INTERACTIVE=false 2026-03-28 01:27:22.224308 | orchestrator | ++ INTERACTIVE=false 2026-03-28 01:27:22.224323 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 01:27:22.224334 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 01:27:22.224346 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-28 01:27:22.226234 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-28 01:27:22.230340 | orchestrator | 2026-03-28 01:27:22.230404 | orchestrator | # OpenStack endpoints 2026-03-28 01:27:22.230415 | orchestrator | 2026-03-28 01:27:22.230425 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-28 01:27:22.230434 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-28 01:27:22.230443 | orchestrator | + export OS_CLOUD=admin 2026-03-28 01:27:22.230451 | orchestrator | + OS_CLOUD=admin 2026-03-28 01:27:22.230459 | orchestrator | + echo 2026-03-28 01:27:22.230467 | orchestrator | + echo '# OpenStack endpoints' 2026-03-28 01:27:22.230475 | orchestrator | + echo 2026-03-28 01:27:22.230483 | orchestrator | + openstack endpoint list 2026-03-28 01:27:25.853496 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-28 01:27:25.853681 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-03-28 01:27:25.853712 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-28 01:27:25.853733 | orchestrator | | 0b27540a9a9d4376bb84e5c05084e922 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-03-28 01:27:25.853753 | orchestrator | | 2c77c07b03e94fb4a241590318df061a | RegionOne | neutron | network | True | 
internal | https://api-int.testbed.osism.xyz:9696 | 2026-03-28 01:27:25.853778 | orchestrator | | 2eaf6d47459348ec87591772829cfe72 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-03-28 01:27:25.853798 | orchestrator | | 35c31943cbd247b5b50891c6cae23906 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-03-28 01:27:25.853817 | orchestrator | | 3a613fe65c5b43eaa1088bd01a2623c2 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-03-28 01:27:25.853836 | orchestrator | | 460c8ee636ab4bf1b257f76e943fc801 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-03-28 01:27:25.853864 | orchestrator | | 4932f88bb63f48b19b14f63baac4c6b5 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-03-28 01:27:25.853885 | orchestrator | | 52a5503a6f8d483a8db79dea645db4f5 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-03-28 01:27:25.853903 | orchestrator | | 5b7a31a9d0af40aa8967ec4daad76719 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-03-28 01:27:25.853921 | orchestrator | | 5db163ba1d7d42c9821ab29ef58ef6de | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-03-28 01:27:25.853938 | orchestrator | | 6022b39e6d494103b42311e4dc266214 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-03-28 01:27:25.853956 | orchestrator | | 685021bfb65e4e2ebd34aa6d26174e40 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-03-28 01:27:25.853975 | orchestrator | | 692ac763b2eb4ac48850258b18ffb7b5 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 
| 2026-03-28 01:27:25.854085 | orchestrator | | 6e8acb73d8ae4339813ed9373c468c01 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-03-28 01:27:25.854111 | orchestrator | | 7822739a774a400fab158ed8c9b882bb | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-03-28 01:27:25.854127 | orchestrator | | 84ff8c79c3d4421aacfaafe7a68f1e14 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-03-28 01:27:25.854139 | orchestrator | | 96e22234a45a4c068d438d0a9a482568 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-03-28 01:27:25.854150 | orchestrator | | ae3122dbd32942c889913e0029756d80 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-03-28 01:27:25.854161 | orchestrator | | be19d8ddc0864adb8bbf5d72eb03f68b | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-03-28 01:27:25.854171 | orchestrator | | cce9f44b9f4b4d048043690bbb0a7245 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-03-28 01:27:25.854206 | orchestrator | | d4e94eb3b3434ff4a8568b82031d34fa | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-03-28 01:27:25.854218 | orchestrator | | ecc1926cd85a4455b2f09e9ea3048b80 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-03-28 01:27:25.854229 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-28 01:27:26.120441 | orchestrator | 2026-03-28 01:27:26.120509 | orchestrator | # Cinder 2026-03-28 01:27:26.120515 | orchestrator | 2026-03-28 01:27:26.120520 | orchestrator | + echo 2026-03-28 
01:27:26.120524 | orchestrator | + echo '# Cinder' 2026-03-28 01:27:26.120529 | orchestrator | + echo 2026-03-28 01:27:26.120533 | orchestrator | + openstack volume service list 2026-03-28 01:27:29.323047 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-28 01:27:29.323146 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-03-28 01:27:29.323158 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-28 01:27:29.323188 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-28T01:27:20.000000 | 2026-03-28 01:27:29.323204 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-28T01:27:19.000000 | 2026-03-28 01:27:29.323219 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-28T01:27:19.000000 | 2026-03-28 01:27:29.323233 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-03-28T01:27:19.000000 | 2026-03-28 01:27:29.323244 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-03-28T01:27:26.000000 | 2026-03-28 01:27:29.323256 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-03-28T01:27:27.000000 | 2026-03-28 01:27:29.323269 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-03-28T01:27:20.000000 | 2026-03-28 01:27:29.323283 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-03-28T01:27:22.000000 | 2026-03-28 01:27:29.323296 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-03-28T01:27:22.000000 | 2026-03-28 01:27:29.323311 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-28 01:27:29.641574 | 
orchestrator | 2026-03-28 01:27:29.641732 | orchestrator | # Neutron 2026-03-28 01:27:29.641747 | orchestrator | 2026-03-28 01:27:29.641759 | orchestrator | + echo 2026-03-28 01:27:29.641770 | orchestrator | + echo '# Neutron' 2026-03-28 01:27:29.641784 | orchestrator | + echo 2026-03-28 01:27:29.641795 | orchestrator | + openstack network agent list 2026-03-28 01:27:32.517361 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-28 01:27:32.517460 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-03-28 01:27:32.517475 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-28 01:27:32.517487 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-03-28 01:27:32.517498 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-03-28 01:27:32.517509 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-03-28 01:27:32.517520 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-03-28 01:27:32.517530 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-03-28 01:27:32.517541 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-03-28 01:27:32.517552 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-28 01:27:32.517563 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | 
UP | neutron-ovn-metadata-agent | 2026-03-28 01:27:32.517574 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-28 01:27:32.517636 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-28 01:27:32.838760 | orchestrator | + openstack network service provider list 2026-03-28 01:27:35.464033 | orchestrator | +---------------+------+---------+ 2026-03-28 01:27:35.464160 | orchestrator | | Service Type | Name | Default | 2026-03-28 01:27:35.464188 | orchestrator | +---------------+------+---------+ 2026-03-28 01:27:35.464208 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-03-28 01:27:35.464227 | orchestrator | +---------------+------+---------+ 2026-03-28 01:27:35.762175 | orchestrator | 2026-03-28 01:27:35.762272 | orchestrator | # Nova 2026-03-28 01:27:35.762284 | orchestrator | 2026-03-28 01:27:35.762293 | orchestrator | + echo 2026-03-28 01:27:35.762303 | orchestrator | + echo '# Nova' 2026-03-28 01:27:35.762312 | orchestrator | + echo 2026-03-28 01:27:35.762322 | orchestrator | + openstack compute service list 2026-03-28 01:27:38.598539 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-28 01:27:38.598696 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-03-28 01:27:38.598715 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-28 01:27:38.598730 | orchestrator | | 4c89531d-0023-44b2-a9f4-394bf58b0a6b | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-28T01:27:30.000000 | 2026-03-28 01:27:38.598744 | orchestrator | | af1c0789-563a-4ef4-87bb-22805e042086 | nova-scheduler | testbed-node-1 
| internal | enabled | up | 2026-03-28T01:27:35.000000 | 2026-03-28 01:27:38.598798 | orchestrator | | 936bcf01-c28c-457a-89ae-c8ec562ba5d5 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-28T01:27:35.000000 | 2026-03-28 01:27:38.598808 | orchestrator | | cb2d7262-c1ca-481c-954d-87f664701e05 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-03-28T01:27:35.000000 | 2026-03-28 01:27:38.598816 | orchestrator | | b46be488-fb3f-40c2-81be-9dbc28786bc2 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-03-28T01:27:37.000000 | 2026-03-28 01:27:38.598824 | orchestrator | | 75ce2af5-8f6d-404f-9f42-ebcc3cd6c43f | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-03-28T01:27:37.000000 | 2026-03-28 01:27:38.598832 | orchestrator | | 0e7ed47a-084e-42cc-9e84-27a4fef56dbf | nova-compute | testbed-node-4 | nova | enabled | up | 2026-03-28T01:27:33.000000 | 2026-03-28 01:27:38.598840 | orchestrator | | 8befab92-cf1e-4893-ae1b-d5fab9be364e | nova-compute | testbed-node-5 | nova | enabled | up | 2026-03-28T01:27:34.000000 | 2026-03-28 01:27:38.598847 | orchestrator | | eae3056d-bda0-4143-b458-9b5dad7ffcb7 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-03-28T01:27:34.000000 | 2026-03-28 01:27:38.598855 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-28 01:27:38.882192 | orchestrator | + openstack hypervisor list 2026-03-28 01:27:41.580917 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-28 01:27:41.581061 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-03-28 01:27:41.581102 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-28 01:27:41.581115 | orchestrator | | aac77065-e567-42b4-a5a6-a5d7c77a244f | 
testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-03-28 01:27:41.581126 | orchestrator | | 6891aa62-b081-4387-ad62-de2b1555fdb3 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-03-28 01:27:41.581137 | orchestrator | | c43f58a9-e4d0-42fd-b72e-8dcb33203d23 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-03-28 01:27:41.581148 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-28 01:27:41.868691 | orchestrator | 2026-03-28 01:27:41.868820 | orchestrator | # Run OpenStack test play 2026-03-28 01:27:41.868852 | orchestrator | 2026-03-28 01:27:41.868874 | orchestrator | + echo 2026-03-28 01:27:41.868895 | orchestrator | + echo '# Run OpenStack test play' 2026-03-28 01:27:41.868915 | orchestrator | + echo 2026-03-28 01:27:41.868935 | orchestrator | + osism apply --environment openstack test 2026-03-28 01:27:43.886290 | orchestrator | 2026-03-28 01:27:43 | INFO  | Trying to run play test in environment openstack 2026-03-28 01:27:54.076129 | orchestrator | 2026-03-28 01:27:54 | INFO  | Task 7b0d4443-ed34-4214-a59b-a6c300b2a49a (test) was prepared for execution. 2026-03-28 01:27:54.076240 | orchestrator | 2026-03-28 01:27:54 | INFO  | It takes a moment until task 7b0d4443-ed34-4214-a59b-a6c300b2a49a (test) has been started and output is visible here. 
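The check script above verifies health by printing `openstack volume service list` (and the Neutron/Nova equivalents) for a human to read. A minimal sketch of how such a check could be automated instead — `find_down` is a hypothetical helper, not part of the testbed scripts, and it assumes the whitespace-separated output of `openstack volume service list -f value -c Binary -c Host -c Status -c State`:

```shell
#!/bin/sh
# find_down: filter "Binary Host Status State" records, printing the
# binary and host of any service that is not both enabled and up.
find_down() {
  awk '$3 != "enabled" || $4 != "up" { print $1, $2 }'
}

# Sample records mirroring the Cinder table in the log above
# (with one service flipped to "down" for illustration).
printf '%s\n' \
  'cinder-scheduler testbed-node-0 enabled up' \
  'cinder-volume testbed-node-1@rbd-volumes enabled up' \
  'cinder-backup testbed-node-2 enabled down' | find_down
```

Piping live CLI output through `find_down` and failing the job when it is non-empty would turn the manual inspection into a gate.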
2026-03-28 01:30:45.814439 | orchestrator | 2026-03-28 01:30:45.815450 | orchestrator | PLAY [Create test project] ***************************************************** 2026-03-28 01:30:45.815588 | orchestrator | 2026-03-28 01:30:45.815610 | orchestrator | TASK [Create test domain] ****************************************************** 2026-03-28 01:30:45.815623 | orchestrator | Saturday 28 March 2026 01:27:58 +0000 (0:00:00.080) 0:00:00.080 ******** 2026-03-28 01:30:45.815635 | orchestrator | changed: [localhost] 2026-03-28 01:30:45.815647 | orchestrator | 2026-03-28 01:30:45.815658 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-03-28 01:30:45.815669 | orchestrator | Saturday 28 March 2026 01:28:02 +0000 (0:00:03.696) 0:00:03.776 ******** 2026-03-28 01:30:45.815680 | orchestrator | changed: [localhost] 2026-03-28 01:30:45.815691 | orchestrator | 2026-03-28 01:30:45.815702 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-03-28 01:30:45.815738 | orchestrator | Saturday 28 March 2026 01:28:06 +0000 (0:00:04.214) 0:00:07.991 ******** 2026-03-28 01:30:45.815756 | orchestrator | changed: [localhost] 2026-03-28 01:30:45.815855 | orchestrator | 2026-03-28 01:30:45.815876 | orchestrator | TASK [Create test project] ***************************************************** 2026-03-28 01:30:45.815896 | orchestrator | Saturday 28 March 2026 01:28:13 +0000 (0:00:06.818) 0:00:14.810 ******** 2026-03-28 01:30:45.815913 | orchestrator | changed: [localhost] 2026-03-28 01:30:45.815933 | orchestrator | 2026-03-28 01:30:45.815954 | orchestrator | TASK [Create test user] ******************************************************** 2026-03-28 01:30:45.815974 | orchestrator | Saturday 28 March 2026 01:28:17 +0000 (0:00:04.197) 0:00:19.008 ******** 2026-03-28 01:30:45.815996 | orchestrator | changed: [localhost] 2026-03-28 01:30:45.816017 | orchestrator | 2026-03-28 01:30:45.816037 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2026-03-28 01:30:45.816058 | orchestrator | Saturday 28 March 2026 01:28:21 +0000 (0:00:04.676) 0:00:23.685 ******** 2026-03-28 01:30:45.816079 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-03-28 01:30:45.816101 | orchestrator | changed: [localhost] => (item=member) 2026-03-28 01:30:45.816115 | orchestrator | changed: [localhost] => (item=creator) 2026-03-28 01:30:45.816127 | orchestrator | 2026-03-28 01:30:45.816145 | orchestrator | TASK [Create test server group] ************************************************ 2026-03-28 01:30:45.816165 | orchestrator | Saturday 28 March 2026 01:28:34 +0000 (0:00:12.097) 0:00:35.783 ******** 2026-03-28 01:30:45.816181 | orchestrator | changed: [localhost] 2026-03-28 01:30:45.816199 | orchestrator | 2026-03-28 01:30:45.816210 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-03-28 01:30:45.816221 | orchestrator | Saturday 28 March 2026 01:28:38 +0000 (0:00:04.374) 0:00:40.157 ******** 2026-03-28 01:30:45.816247 | orchestrator | changed: [localhost] 2026-03-28 01:30:45.816258 | orchestrator | 2026-03-28 01:30:45.816269 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-03-28 01:30:45.816279 | orchestrator | Saturday 28 March 2026 01:28:43 +0000 (0:00:05.040) 0:00:45.197 ******** 2026-03-28 01:30:45.816290 | orchestrator | changed: [localhost] 2026-03-28 01:30:45.816301 | orchestrator | 2026-03-28 01:30:45.816312 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-03-28 01:30:45.816322 | orchestrator | Saturday 28 March 2026 01:28:48 +0000 (0:00:04.568) 0:00:49.766 ******** 2026-03-28 01:30:45.816333 | orchestrator | changed: [localhost] 2026-03-28 01:30:45.816344 | orchestrator | 2026-03-28 01:30:45.816355 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2026-03-28 01:30:45.816365 | orchestrator | Saturday 28 March 2026 01:28:52 +0000 (0:00:04.493) 0:00:54.259 ******** 2026-03-28 01:30:45.816376 | orchestrator | changed: [localhost] 2026-03-28 01:30:45.816387 | orchestrator | 2026-03-28 01:30:45.816398 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-03-28 01:30:45.816409 | orchestrator | Saturday 28 March 2026 01:28:56 +0000 (0:00:04.319) 0:00:58.578 ******** 2026-03-28 01:30:45.816420 | orchestrator | changed: [localhost] 2026-03-28 01:30:45.816431 | orchestrator | 2026-03-28 01:30:45.816442 | orchestrator | TASK [Create test network] ***************************************************** 2026-03-28 01:30:45.816453 | orchestrator | Saturday 28 March 2026 01:29:01 +0000 (0:00:04.236) 0:01:02.815 ******** 2026-03-28 01:30:45.816464 | orchestrator | changed: [localhost] 2026-03-28 01:30:45.816501 | orchestrator | 2026-03-28 01:30:45.816516 | orchestrator | TASK [Create test subnet] ****************************************************** 2026-03-28 01:30:45.816527 | orchestrator | Saturday 28 March 2026 01:29:06 +0000 (0:00:05.088) 0:01:07.904 ******** 2026-03-28 01:30:45.816538 | orchestrator | changed: [localhost] 2026-03-28 01:30:45.816548 | orchestrator | 2026-03-28 01:30:45.816559 | orchestrator | TASK [Create test router] ****************************************************** 2026-03-28 01:30:45.816570 | orchestrator | Saturday 28 March 2026 01:29:12 +0000 (0:00:05.936) 0:01:13.840 ******** 2026-03-28 01:30:45.816580 | orchestrator | changed: [localhost] 2026-03-28 01:30:45.816604 | orchestrator | 2026-03-28 01:30:45.816615 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-03-28 01:30:45.816626 | orchestrator | 2026-03-28 01:30:45.816636 | orchestrator | TASK [Get test server group] *************************************************** 2026-03-28 01:30:45.816647 
| orchestrator | Saturday 28 March 2026 01:29:23 +0000 (0:00:11.367) 0:01:25.208 ******** 2026-03-28 01:30:45.816658 | orchestrator | ok: [localhost] 2026-03-28 01:30:45.816669 | orchestrator | 2026-03-28 01:30:45.816680 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-03-28 01:30:45.816696 | orchestrator | Saturday 28 March 2026 01:29:27 +0000 (0:00:03.748) 0:01:28.956 ******** 2026-03-28 01:30:45.816707 | orchestrator | skipping: [localhost] 2026-03-28 01:30:45.816718 | orchestrator | 2026-03-28 01:30:45.816729 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-03-28 01:30:45.816740 | orchestrator | Saturday 28 March 2026 01:29:27 +0000 (0:00:00.047) 0:01:29.003 ******** 2026-03-28 01:30:45.816750 | orchestrator | skipping: [localhost] 2026-03-28 01:30:45.816761 | orchestrator | 2026-03-28 01:30:45.816771 | orchestrator | TASK [Delete test instances] *************************************************** 2026-03-28 01:30:45.816782 | orchestrator | Saturday 28 March 2026 01:29:27 +0000 (0:00:00.061) 0:01:29.065 ******** 2026-03-28 01:30:45.816793 | orchestrator | skipping: [localhost] => (item=test-4)  2026-03-28 01:30:45.816804 | orchestrator | skipping: [localhost] => (item=test-3)  2026-03-28 01:30:45.816839 | orchestrator | skipping: [localhost] => (item=test-2)  2026-03-28 01:30:45.816851 | orchestrator | skipping: [localhost] => (item=test-1)  2026-03-28 01:30:45.816862 | orchestrator | skipping: [localhost] => (item=test)  2026-03-28 01:30:45.816872 | orchestrator | skipping: [localhost] 2026-03-28 01:30:45.816883 | orchestrator | 2026-03-28 01:30:45.816894 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-03-28 01:30:45.816905 | orchestrator | Saturday 28 March 2026 01:29:27 +0000 (0:00:00.183) 0:01:29.248 ******** 2026-03-28 01:30:45.816915 | orchestrator | skipping: [localhost] 2026-03-28 
01:30:45.817001 | orchestrator | 2026-03-28 01:30:45.817021 | orchestrator | TASK [Create test instances] *************************************************** 2026-03-28 01:30:45.817041 | orchestrator | Saturday 28 March 2026 01:29:27 +0000 (0:00:00.160) 0:01:29.409 ******** 2026-03-28 01:30:45.817061 | orchestrator | changed: [localhost] => (item=test) 2026-03-28 01:30:45.817081 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-28 01:30:45.817100 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-28 01:30:45.817119 | orchestrator | changed: [localhost] => (item=test-3) 2026-03-28 01:30:45.817138 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-28 01:30:45.817155 | orchestrator | 2026-03-28 01:30:45.817174 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-03-28 01:30:45.817193 | orchestrator | Saturday 28 March 2026 01:29:32 +0000 (0:00:04.941) 0:01:34.350 ******** 2026-03-28 01:30:45.817213 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-03-28 01:30:45.817235 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-03-28 01:30:45.817254 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-03-28 01:30:45.817274 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 2026-03-28 01:30:45.817293 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (56 retries left). 
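The "Create test instances" / "Wait for instance creation to complete" pair above follows Ansible's fire-and-forget async pattern: the create task is launched with `async`/`poll: 0` so all five servers build in parallel, and a follow-up `async_status` task polls each returned `ansible_job_id` until the job finishes (the 60-retry countdown visible in the log matches such a loop). A hedged YAML sketch of that pattern — module arguments, timeouts, and task names are illustrative, not taken from the actual test playbook:

```yaml
- name: Create test instances
  openstack.cloud.server:
    cloud: admin
    name: "{{ item }}"
    state: present
    # image, flavor, network, key_name omitted for brevity
  loop: [test, test-1, test-2, test-3, test-4]
  async: 600   # allow up to 10 minutes per instance
  poll: 0      # do not wait; return a job handle immediately
  register: created

- name: Wait for instance creation to complete
  ansible.builtin.async_status:
    jid: "{{ item.ansible_job_id }}"
  loop: "{{ created.results }}"
  register: job
  until: job.finished
  retries: 60  # matches the retry countdown in the log
  delay: 10
```

The `item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': ...}` dictionaries echoed below are exactly the registered job handles such a loop passes to `async_status`.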
2026-03-28 01:30:45.817373 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j370830196941.2663', 'results_file': '/ansible/.ansible_async/j370830196941.2663', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-28 01:30:45.817430 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j213606460123.2688', 'results_file': '/ansible/.ansible_async/j213606460123.2688', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-28 01:30:45.817469 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j650784504158.2713', 'results_file': '/ansible/.ansible_async/j650784504158.2713', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-28 01:30:45.817564 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j428404583524.2738', 'results_file': '/ansible/.ansible_async/j428404583524.2738', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-28 01:30:45.817577 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j932550241691.2763', 'results_file': '/ansible/.ansible_async/j932550241691.2763', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-28 01:30:45.817588 | orchestrator | 2026-03-28 01:30:45.817600 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-03-28 01:30:45.817611 | orchestrator | Saturday 28 March 2026 01:30:31 +0000 (0:00:58.433) 0:02:32.783 ******** 2026-03-28 01:30:45.817621 | orchestrator | changed: [localhost] => (item=test) 2026-03-28 01:30:45.817633 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-28 01:30:45.817644 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-28 01:30:45.817654 | orchestrator | changed: 
[localhost] => (item=test-3) 2026-03-28 01:30:45.817665 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-28 01:30:45.817676 | orchestrator | 2026-03-28 01:30:45.817686 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-03-28 01:30:45.817697 | orchestrator | Saturday 28 March 2026 01:30:36 +0000 (0:00:05.000) 0:02:37.784 ******** 2026-03-28 01:30:45.817708 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 2026-03-28 01:30:45.817732 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j801252148983.2874', 'results_file': '/ansible/.ansible_async/j801252148983.2874', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-28 01:30:45.817744 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j56443389599.2899', 'results_file': '/ansible/.ansible_async/j56443389599.2899', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-28 01:30:45.817755 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j267854663375.2924', 'results_file': '/ansible/.ansible_async/j267854663375.2924', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-28 01:30:45.817783 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j790252030372.2949', 'results_file': '/ansible/.ansible_async/j790252030372.2949', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-28 01:31:28.440891 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j808952762702.2974', 'results_file': '/ansible/.ansible_async/j808952762702.2974', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-28 01:31:28.440992 | orchestrator | 2026-03-28 
01:31:28.441007 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-03-28 01:31:28.441019 | orchestrator | Saturday 28 March 2026 01:30:45 +0000 (0:00:09.718) 0:02:47.502 ******** 2026-03-28 01:31:28.441030 | orchestrator | changed: [localhost] => (item=test) 2026-03-28 01:31:28.441041 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-28 01:31:28.441051 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-28 01:31:28.441061 | orchestrator | changed: [localhost] => (item=test-3) 2026-03-28 01:31:28.441072 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-28 01:31:28.441082 | orchestrator | 2026-03-28 01:31:28.441092 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-03-28 01:31:28.441124 | orchestrator | Saturday 28 March 2026 01:30:51 +0000 (0:00:05.902) 0:02:53.405 ******** 2026-03-28 01:31:28.441134 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 
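Each play ends with a PLAY RECAP line in `key=value` form (e.g. `localhost : ok=26 changed=23 unreachable=0 failed=0 ...`), which a CI wrapper can gate on without parsing the full log. A small sketch, assuming recap lines in that form — `recap_failed` is a hypothetical helper, not part of the job:

```shell
#!/bin/sh
# recap_failed: extract the failed= counter from an Ansible PLAY RECAP line.
recap_failed() {
  awk 'match($0, /failed=[0-9]+/) { print substr($0, RSTART + 7, RLENGTH - 7) }'
}

# Sample recap line mirroring the log above; prints the failed count.
echo 'localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0' \
  | recap_failed
```

A non-zero result (or a non-zero exit code from `osism apply` itself) would mark the check stage as failed.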
2026-03-28 01:31:28.441145 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j131312829176.3043', 'results_file': '/ansible/.ansible_async/j131312829176.3043', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-28 01:31:28.441156 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j789141468664.3068', 'results_file': '/ansible/.ansible_async/j789141468664.3068', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-28 01:31:28.441180 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j55643089962.3094', 'results_file': '/ansible/.ansible_async/j55643089962.3094', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-28 01:31:28.441190 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j971697288631.3120', 'results_file': '/ansible/.ansible_async/j971697288631.3120', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-28 01:31:28.441199 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j947924369857.3146', 'results_file': '/ansible/.ansible_async/j947924369857.3146', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-28 01:31:28.441209 | orchestrator | 2026-03-28 01:31:28.441219 | orchestrator | TASK [Create test volume] ****************************************************** 2026-03-28 01:31:28.441228 | orchestrator | Saturday 28 March 2026 01:31:02 +0000 (0:00:10.745) 0:03:04.150 ******** 2026-03-28 01:31:28.441237 | orchestrator | changed: [localhost] 2026-03-28 01:31:28.441247 | orchestrator | 2026-03-28 01:31:28.441257 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-03-28 01:31:28.441266 | orchestrator | Saturday 28 March 2026 
01:31:09 +0000 (0:00:06.720) 0:03:10.871 ******** 2026-03-28 01:31:28.441276 | orchestrator | changed: [localhost] 2026-03-28 01:31:28.441285 | orchestrator | 2026-03-28 01:31:28.441294 | orchestrator | TASK [Create floating ip address] ********************************************** 2026-03-28 01:31:28.441304 | orchestrator | Saturday 28 March 2026 01:31:22 +0000 (0:00:13.589) 0:03:24.460 ******** 2026-03-28 01:31:28.441314 | orchestrator | ok: [localhost] 2026-03-28 01:31:28.441324 | orchestrator | 2026-03-28 01:31:28.441333 | orchestrator | TASK [Print floating ip address] *********************************************** 2026-03-28 01:31:28.441343 | orchestrator | Saturday 28 March 2026 01:31:28 +0000 (0:00:05.339) 0:03:29.800 ******** 2026-03-28 01:31:28.441352 | orchestrator | ok: [localhost] => { 2026-03-28 01:31:28.441362 | orchestrator |  "msg": "192.168.112.173" 2026-03-28 01:31:28.441371 | orchestrator | } 2026-03-28 01:31:28.441381 | orchestrator | 2026-03-28 01:31:28.441391 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:31:28.441401 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 01:31:28.441411 | orchestrator | 2026-03-28 01:31:28.441421 | orchestrator | 2026-03-28 01:31:28.441432 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:31:28.441443 | orchestrator | Saturday 28 March 2026 01:31:28 +0000 (0:00:00.043) 0:03:29.843 ******** 2026-03-28 01:31:28.441455 | orchestrator | =============================================================================== 2026-03-28 01:31:28.441465 | orchestrator | Wait for instance creation to complete --------------------------------- 58.43s 2026-03-28 01:31:28.441499 | orchestrator | Attach test volume ----------------------------------------------------- 13.59s 2026-03-28 01:31:28.441511 | orchestrator | Add member roles to user 
test ------------------------------------------ 12.10s 2026-03-28 01:31:28.441522 | orchestrator | Create test router ----------------------------------------------------- 11.37s 2026-03-28 01:31:28.441543 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.75s 2026-03-28 01:31:28.441553 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.72s 2026-03-28 01:31:28.441563 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.82s 2026-03-28 01:31:28.441588 | orchestrator | Create test volume ------------------------------------------------------ 6.72s 2026-03-28 01:31:28.441598 | orchestrator | Create test subnet ------------------------------------------------------ 5.94s 2026-03-28 01:31:28.441607 | orchestrator | Add tag to instances ---------------------------------------------------- 5.90s 2026-03-28 01:31:28.441617 | orchestrator | Create floating ip address ---------------------------------------------- 5.34s 2026-03-28 01:31:28.441626 | orchestrator | Create test network ----------------------------------------------------- 5.09s 2026-03-28 01:31:28.441635 | orchestrator | Create ssh security group ----------------------------------------------- 5.04s 2026-03-28 01:31:28.441645 | orchestrator | Add metadata to instances ----------------------------------------------- 5.00s 2026-03-28 01:31:28.441654 | orchestrator | Create test instances --------------------------------------------------- 4.94s 2026-03-28 01:31:28.441664 | orchestrator | Create test user -------------------------------------------------------- 4.68s 2026-03-28 01:31:28.441673 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.57s 2026-03-28 01:31:28.441683 | orchestrator | Create icmp security group ---------------------------------------------- 4.49s 2026-03-28 01:31:28.441692 | orchestrator | Create test server group 
------------------------------------------------ 4.37s 2026-03-28 01:31:28.441722 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.32s 2026-03-28 01:31:28.842752 | orchestrator | + server_list 2026-03-28 01:31:28.842836 | orchestrator | + openstack --os-cloud test server list 2026-03-28 01:31:32.901734 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-28 01:31:32.901866 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-03-28 01:31:32.901884 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-28 01:31:32.901915 | orchestrator | | 364e37d0-ba2d-415a-a441-6d2fcc50060b | test-4 | ACTIVE | test=192.168.112.126, 192.168.200.135 | N/A (booted from volume) | SCS-1L-1 | 2026-03-28 01:31:32.901927 | orchestrator | | 36eb161b-c6a0-4293-80c9-7c0cf5a64214 | test-3 | ACTIVE | test=192.168.112.149, 192.168.200.246 | N/A (booted from volume) | SCS-1L-1 | 2026-03-28 01:31:32.901939 | orchestrator | | 6f38b9e3-0e68-45b6-9874-1149e648de6f | test-2 | ACTIVE | test=192.168.112.200, 192.168.200.232 | N/A (booted from volume) | SCS-1L-1 | 2026-03-28 01:31:32.901951 | orchestrator | | 0be8eced-24ca-4856-b9b3-b7808bdc0d34 | test | ACTIVE | test=192.168.112.173, 192.168.200.57 | N/A (booted from volume) | SCS-1L-1 | 2026-03-28 01:31:32.901963 | orchestrator | | aa552505-8b5a-4b2d-84f7-40c7f6ff7bd3 | test-1 | ACTIVE | test=192.168.112.131, 192.168.200.172 | N/A (booted from volume) | SCS-1L-1 | 2026-03-28 01:31:32.901975 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-28 01:31:33.212374 | orchestrator | + openstack --os-cloud test server show test 2026-03-28 01:31:36.543189 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:31:36.543312 | orchestrator | | Field | Value | 2026-03-28 01:31:36.543350 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:31:36.543363 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-28 01:31:36.543375 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-28 01:31:36.543386 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-28 01:31:36.543398 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-03-28 01:31:36.543409 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-28 01:31:36.543426 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-28 01:31:36.543458 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-28 01:31:36.543470 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-28 01:31:36.543566 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-28 01:31:36.543582 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-28 01:31:36.543596 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-28 01:31:36.543609 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-28 01:31:36.543622 | orchestrator | | OS-EXT-STS:power_state | 
Running | 2026-03-28 01:31:36.543635 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-28 01:31:36.543653 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-28 01:31:36.543667 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-28T01:30:05.000000 | 2026-03-28 01:31:36.543689 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-28 01:31:36.543709 | orchestrator | | accessIPv4 | | 2026-03-28 01:31:36.543723 | orchestrator | | accessIPv6 | | 2026-03-28 01:31:36.543737 | orchestrator | | addresses | test=192.168.112.173, 192.168.200.57 | 2026-03-28 01:31:36.543748 | orchestrator | | config_drive | | 2026-03-28 01:31:36.543759 | orchestrator | | created | 2026-03-28T01:29:37Z | 2026-03-28 01:31:36.543771 | orchestrator | | description | None | 2026-03-28 01:31:36.543782 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-28 01:31:36.543793 | orchestrator | | hostId | 202feeb9cba21947919e9f9180bcba5e0a1a64f68a57680750692835 | 2026-03-28 01:31:36.543804 | orchestrator | | host_status | None | 2026-03-28 01:31:36.543823 | orchestrator | | id | 0be8eced-24ca-4856-b9b3-b7808bdc0d34 | 2026-03-28 01:31:36.543840 | orchestrator | | image | N/A (booted from volume) | 2026-03-28 01:31:36.543852 | orchestrator | | key_name | test | 2026-03-28 01:31:36.543863 | orchestrator | | locked | False | 2026-03-28 01:31:36.543880 | orchestrator | | locked_reason | None | 2026-03-28 01:31:36.543892 | orchestrator | | name | test | 2026-03-28 01:31:36.543903 | orchestrator | | pinned_availability_zone | None | 2026-03-28 01:31:36.543914 | orchestrator | | progress | 0 | 2026-03-28 01:31:36.543930 | orchestrator | | 
project_id | 4ad906bf0518444bba7667da1a6ac721 | 2026-03-28 01:31:36.543941 | orchestrator | | properties | hostname='test' | 2026-03-28 01:31:36.544145 | orchestrator | | security_groups | name='ssh' | 2026-03-28 01:31:36.544174 | orchestrator | | | name='icmp' | 2026-03-28 01:31:36.544195 | orchestrator | | server_groups | None | 2026-03-28 01:31:36.544216 | orchestrator | | status | ACTIVE | 2026-03-28 01:31:36.544238 | orchestrator | | tags | test | 2026-03-28 01:31:36.544258 | orchestrator | | trusted_image_certificates | None | 2026-03-28 01:31:36.544279 | orchestrator | | updated | 2026-03-28T01:30:37Z | 2026-03-28 01:31:36.544298 | orchestrator | | user_id | 3aa6d50047b549deb3453cfcc2d40626 | 2026-03-28 01:31:36.544323 | orchestrator | | volumes_attached | delete_on_termination='True', id='64d8486f-1bba-40b8-925d-6d0840008355' | 2026-03-28 01:31:36.544343 | orchestrator | | | delete_on_termination='False', id='9d98eb64-a2b5-47f7-83da-f36a477710f1' | 2026-03-28 01:31:36.544363 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:31:36.863148 | orchestrator | + openstack --os-cloud test server show test-1 2026-03-28 01:31:39.928255 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 
01:31:39.928430 | orchestrator | | Field | Value | 2026-03-28 01:31:39.928454 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:31:39.928466 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-28 01:31:39.928478 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-28 01:31:39.928517 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-28 01:31:39.928529 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-03-28 01:31:39.928605 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-28 01:31:39.928620 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-28 01:31:39.928651 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-28 01:31:39.928664 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-28 01:31:39.928675 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-28 01:31:39.928686 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-28 01:31:39.928697 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-28 01:31:39.928708 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-28 01:31:39.928719 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-28 01:31:39.928738 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-28 01:31:39.928755 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-28 01:31:39.928767 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-28T01:30:05.000000 | 2026-03-28 01:31:39.928798 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-28 01:31:39.928820 | orchestrator | | accessIPv4 | | 2026-03-28 
01:31:39.928847 | orchestrator | | accessIPv6 | | 2026-03-28 01:31:39.928870 | orchestrator | | addresses | test=192.168.112.131, 192.168.200.172 | 2026-03-28 01:31:39.928890 | orchestrator | | config_drive | | 2026-03-28 01:31:39.928911 | orchestrator | | created | 2026-03-28T01:29:37Z | 2026-03-28 01:31:39.928929 | orchestrator | | description | None | 2026-03-28 01:31:39.928950 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-28 01:31:39.928967 | orchestrator | | hostId | 202feeb9cba21947919e9f9180bcba5e0a1a64f68a57680750692835 | 2026-03-28 01:31:39.928979 | orchestrator | | host_status | None | 2026-03-28 01:31:39.929000 | orchestrator | | id | aa552505-8b5a-4b2d-84f7-40c7f6ff7bd3 | 2026-03-28 01:31:39.929012 | orchestrator | | image | N/A (booted from volume) | 2026-03-28 01:31:39.929023 | orchestrator | | key_name | test | 2026-03-28 01:31:39.929033 | orchestrator | | locked | False | 2026-03-28 01:31:39.929044 | orchestrator | | locked_reason | None | 2026-03-28 01:31:39.929055 | orchestrator | | name | test-1 | 2026-03-28 01:31:39.929074 | orchestrator | | pinned_availability_zone | None | 2026-03-28 01:31:39.929088 | orchestrator | | progress | 0 | 2026-03-28 01:31:39.929101 | orchestrator | | project_id | 4ad906bf0518444bba7667da1a6ac721 | 2026-03-28 01:31:39.929114 | orchestrator | | properties | hostname='test-1' | 2026-03-28 01:31:39.929134 | orchestrator | | security_groups | name='ssh' | 2026-03-28 01:31:39.929155 | orchestrator | | | name='icmp' | 2026-03-28 01:31:39.929167 | orchestrator | | server_groups | None | 2026-03-28 01:31:39.929178 | orchestrator | | status | ACTIVE | 2026-03-28 
01:31:39.929189 | orchestrator | | tags | test | 2026-03-28 01:31:39.929206 | orchestrator | | trusted_image_certificates | None | 2026-03-28 01:31:39.929217 | orchestrator | | updated | 2026-03-28T01:30:37Z | 2026-03-28 01:31:39.929233 | orchestrator | | user_id | 3aa6d50047b549deb3453cfcc2d40626 | 2026-03-28 01:31:39.929244 | orchestrator | | volumes_attached | delete_on_termination='True', id='cf20315c-1eed-447a-9d9a-55ba59650b51' | 2026-03-28 01:31:39.929255 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:31:40.284037 | orchestrator | + openstack --os-cloud test server show test-2 2026-03-28 01:31:43.442598 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:31:43.442720 | orchestrator | | Field | Value | 2026-03-28 01:31:43.442739 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:31:43.442751 | 
orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-28 01:31:43.442789 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-28 01:31:43.442802 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-28 01:31:43.442830 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-03-28 01:31:43.442868 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-28 01:31:43.442881 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-28 01:31:43.442912 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-28 01:31:43.442925 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-28 01:31:43.442936 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-28 01:31:43.442947 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-28 01:31:43.442966 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-28 01:31:43.442978 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-28 01:31:43.442989 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-28 01:31:43.443000 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-28 01:31:43.443017 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-28 01:31:43.443029 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-28T01:30:05.000000 | 2026-03-28 01:31:43.443048 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-28 01:31:43.443060 | orchestrator | | accessIPv4 | | 2026-03-28 01:31:43.443073 | orchestrator | | accessIPv6 | | 2026-03-28 01:31:43.443086 | orchestrator | | addresses | test=192.168.112.200, 192.168.200.232 | 2026-03-28 01:31:43.443106 | orchestrator | | config_drive | | 2026-03-28 01:31:43.443119 | orchestrator | | created | 2026-03-28T01:29:38Z | 2026-03-28 01:31:43.443132 | orchestrator | | description | None | 2026-03-28 01:31:43.443145 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', 
extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-28 01:31:43.443163 | orchestrator | | hostId | 499f4b82ffd5d3958350802fcadafa958e77f15f2867c6ea8398fb69 | 2026-03-28 01:31:43.443177 | orchestrator | | host_status | None | 2026-03-28 01:31:43.443197 | orchestrator | | id | 6f38b9e3-0e68-45b6-9874-1149e648de6f | 2026-03-28 01:31:43.443211 | orchestrator | | image | N/A (booted from volume) | 2026-03-28 01:31:43.443224 | orchestrator | | key_name | test | 2026-03-28 01:31:43.443244 | orchestrator | | locked | False | 2026-03-28 01:31:43.443257 | orchestrator | | locked_reason | None | 2026-03-28 01:31:43.443270 | orchestrator | | name | test-2 | 2026-03-28 01:31:43.443283 | orchestrator | | pinned_availability_zone | None | 2026-03-28 01:31:43.443296 | orchestrator | | progress | 0 | 2026-03-28 01:31:43.443314 | orchestrator | | project_id | 4ad906bf0518444bba7667da1a6ac721 | 2026-03-28 01:31:43.443327 | orchestrator | | properties | hostname='test-2' | 2026-03-28 01:31:43.443347 | orchestrator | | security_groups | name='ssh' | 2026-03-28 01:31:43.443360 | orchestrator | | | name='icmp' | 2026-03-28 01:31:43.443379 | orchestrator | | server_groups | None | 2026-03-28 01:31:43.443392 | orchestrator | | status | ACTIVE | 2026-03-28 01:31:43.443405 | orchestrator | | tags | test | 2026-03-28 01:31:43.443418 | orchestrator | | trusted_image_certificates | None | 2026-03-28 01:31:43.443431 | orchestrator | | updated | 2026-03-28T01:30:38Z | 2026-03-28 01:31:43.443444 | orchestrator | | user_id | 3aa6d50047b549deb3453cfcc2d40626 | 2026-03-28 01:31:43.443456 | orchestrator | | volumes_attached | delete_on_termination='True', id='ab5aba66-8694-4039-8b96-3ed2765a52fd' | 2026-03-28 01:31:43.445328 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:31:43.732126 | orchestrator | + openstack --os-cloud test server show test-3 2026-03-28 01:31:46.684279 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:31:46.684381 | orchestrator | | Field | Value | 2026-03-28 01:31:46.684392 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:31:46.684401 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-28 01:31:46.684409 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-28 01:31:46.684417 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-28 01:31:46.684433 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-03-28 01:31:46.684441 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-28 01:31:46.684451 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-28 
01:31:46.684472 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-28 01:31:46.684481 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-28 01:31:46.684522 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-28 01:31:46.684533 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-28 01:31:46.684540 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-28 01:31:46.684548 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-28 01:31:46.684555 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-28 01:31:46.684562 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-28 01:31:46.684573 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-28 01:31:46.684580 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-28T01:30:05.000000 | 2026-03-28 01:31:46.684594 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-28 01:31:46.684606 | orchestrator | | accessIPv4 | | 2026-03-28 01:31:46.684614 | orchestrator | | accessIPv6 | | 2026-03-28 01:31:46.684621 | orchestrator | | addresses | test=192.168.112.149, 192.168.200.246 | 2026-03-28 01:31:46.684628 | orchestrator | | config_drive | | 2026-03-28 01:31:46.684636 | orchestrator | | created | 2026-03-28T01:29:40Z | 2026-03-28 01:31:46.684643 | orchestrator | | description | None | 2026-03-28 01:31:46.684650 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-28 01:31:46.684661 | orchestrator | | hostId | 499f4b82ffd5d3958350802fcadafa958e77f15f2867c6ea8398fb69 | 2026-03-28 01:31:46.684668 | orchestrator | | host_status | None | 2026-03-28 01:31:46.684685 | orchestrator | | id | 
36eb161b-c6a0-4293-80c9-7c0cf5a64214 | 2026-03-28 01:31:46.684693 | orchestrator | | image | N/A (booted from volume) | 2026-03-28 01:31:46.684700 | orchestrator | | key_name | test | 2026-03-28 01:31:46.684708 | orchestrator | | locked | False | 2026-03-28 01:31:46.684715 | orchestrator | | locked_reason | None | 2026-03-28 01:31:46.684723 | orchestrator | | name | test-3 | 2026-03-28 01:31:46.684730 | orchestrator | | pinned_availability_zone | None | 2026-03-28 01:31:46.684737 | orchestrator | | progress | 0 | 2026-03-28 01:31:46.684748 | orchestrator | | project_id | 4ad906bf0518444bba7667da1a6ac721 | 2026-03-28 01:31:46.684760 | orchestrator | | properties | hostname='test-3' | 2026-03-28 01:31:46.684772 | orchestrator | | security_groups | name='ssh' | 2026-03-28 01:31:46.684780 | orchestrator | | | name='icmp' | 2026-03-28 01:31:46.684787 | orchestrator | | server_groups | None | 2026-03-28 01:31:46.684794 | orchestrator | | status | ACTIVE | 2026-03-28 01:31:46.684802 | orchestrator | | tags | test | 2026-03-28 01:31:46.684809 | orchestrator | | trusted_image_certificates | None | 2026-03-28 01:31:46.684816 | orchestrator | | updated | 2026-03-28T01:30:39Z | 2026-03-28 01:31:46.684824 | orchestrator | | user_id | 3aa6d50047b549deb3453cfcc2d40626 | 2026-03-28 01:31:46.684842 | orchestrator | | volumes_attached | delete_on_termination='True', id='72e0acee-5e4d-42a1-bf12-35acec1f9ec5' | 2026-03-28 01:31:46.685462 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:31:47.030758 | orchestrator | + openstack --os-cloud test server show test-4 2026-03-28 01:31:50.024675 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:31:50.025543 | orchestrator | | Field | Value | 2026-03-28 01:31:50.025572 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:31:50.025581 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-28 01:31:50.025590 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-28 01:31:50.025601 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-28 01:31:50.025614 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-03-28 01:31:50.025628 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-28 01:31:50.025687 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-28 01:31:50.025725 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-28 01:31:50.025740 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-28 01:31:50.025755 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-28 01:31:50.025771 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-28 01:31:50.025786 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-28 01:31:50.025800 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-28 01:31:50.025811 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-03-28 01:31:50.025821 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-28 01:31:50.025839 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-28 01:31:50.025854 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-28T01:30:07.000000 | 2026-03-28 01:31:50.025872 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-28 01:31:50.025882 | orchestrator | | accessIPv4 | | 2026-03-28 01:31:50.025891 | orchestrator | | accessIPv6 | | 2026-03-28 01:31:50.025900 | orchestrator | | addresses | test=192.168.112.126, 192.168.200.135 | 2026-03-28 01:31:50.025909 | orchestrator | | config_drive | | 2026-03-28 01:31:50.025919 | orchestrator | | created | 2026-03-28T01:29:40Z | 2026-03-28 01:31:50.025928 | orchestrator | | description | None | 2026-03-28 01:31:50.025942 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-28 01:31:50.025952 | orchestrator | | hostId | 0618afabbde7cb21cab3b80da4672605c7721e1acc6d3a45fe1a062c | 2026-03-28 01:31:50.025962 | orchestrator | | host_status | None | 2026-03-28 01:31:50.026553 | orchestrator | | id | 364e37d0-ba2d-415a-a441-6d2fcc50060b | 2026-03-28 01:31:50.026594 | orchestrator | | image | N/A (booted from volume) | 2026-03-28 01:31:50.026603 | orchestrator | | key_name | test | 2026-03-28 01:31:50.026611 | orchestrator | | locked | False | 2026-03-28 01:31:50.026619 | orchestrator | | locked_reason | None | 2026-03-28 01:31:50.026627 | orchestrator | | name | test-4 | 2026-03-28 01:31:50.026650 | orchestrator | | pinned_availability_zone | None | 2026-03-28 01:31:50.026659 | orchestrator | | progress | 0 | 2026-03-28 
01:31:50.026667 | orchestrator | | project_id | 4ad906bf0518444bba7667da1a6ac721 | 2026-03-28 01:31:50.026675 | orchestrator | | properties | hostname='test-4' | 2026-03-28 01:31:50.026695 | orchestrator | | security_groups | name='ssh' | 2026-03-28 01:31:50.026704 | orchestrator | | | name='icmp' | 2026-03-28 01:31:50.026712 | orchestrator | | server_groups | None | 2026-03-28 01:31:50.026720 | orchestrator | | status | ACTIVE | 2026-03-28 01:31:50.026728 | orchestrator | | tags | test | 2026-03-28 01:31:50.026736 | orchestrator | | trusted_image_certificates | None | 2026-03-28 01:31:50.026757 | orchestrator | | updated | 2026-03-28T01:30:40Z | 2026-03-28 01:31:50.026769 | orchestrator | | user_id | 3aa6d50047b549deb3453cfcc2d40626 | 2026-03-28 01:31:50.026784 | orchestrator | | volumes_attached | delete_on_termination='True', id='e0a2e253-a2c3-4838-8330-1bfc25b0054b' | 2026-03-28 01:31:50.027915 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:31:50.359934 | orchestrator | + server_ping 2026-03-28 01:31:50.360030 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-28 01:31:50.360045 | orchestrator | ++ tr -d '\r' 2026-03-28 01:31:53.482916 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 01:31:53.482992 | orchestrator | + ping -c3 192.168.112.126 2026-03-28 01:31:53.501353 | orchestrator | PING 192.168.112.126 (192.168.112.126) 56(84) bytes of data. 
2026-03-28 01:31:53.501426 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=1 ttl=63 time=7.43 ms
2026-03-28 01:31:54.498910 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=2 ttl=63 time=3.18 ms
2026-03-28 01:31:55.498867 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=3 ttl=63 time=1.82 ms
2026-03-28 01:31:55.498969 | orchestrator |
2026-03-28 01:31:55.498984 | orchestrator | --- 192.168.112.126 ping statistics ---
2026-03-28 01:31:55.498996 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:31:55.499006 | orchestrator | rtt min/avg/max/mdev = 1.824/4.142/7.427/2.387 ms
2026-03-28 01:31:55.500100 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:31:55.500131 | orchestrator | + ping -c3 192.168.112.173
2026-03-28 01:31:55.511227 | orchestrator | PING 192.168.112.173 (192.168.112.173) 56(84) bytes of data.
2026-03-28 01:31:55.511346 | orchestrator | 64 bytes from 192.168.112.173: icmp_seq=1 ttl=63 time=6.02 ms
2026-03-28 01:31:56.508612 | orchestrator | 64 bytes from 192.168.112.173: icmp_seq=2 ttl=63 time=2.81 ms
2026-03-28 01:31:57.508597 | orchestrator | 64 bytes from 192.168.112.173: icmp_seq=3 ttl=63 time=2.04 ms
2026-03-28 01:31:57.508723 | orchestrator |
2026-03-28 01:31:57.508751 | orchestrator | --- 192.168.112.173 ping statistics ---
2026-03-28 01:31:57.508774 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2001ms
2026-03-28 01:31:57.508786 | orchestrator | rtt min/avg/max/mdev = 2.044/3.624/6.020/1.722 ms
2026-03-28 01:31:57.509080 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:31:57.509134 | orchestrator | + ping -c3 192.168.112.200
2026-03-28 01:31:57.520602 | orchestrator | PING 192.168.112.200 (192.168.112.200) 56(84) bytes of data.
2026-03-28 01:31:57.520667 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=1 ttl=63 time=6.71 ms
2026-03-28 01:31:58.519016 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=2 ttl=63 time=2.67 ms
2026-03-28 01:31:59.520840 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=3 ttl=63 time=2.23 ms
2026-03-28 01:31:59.520962 | orchestrator |
2026-03-28 01:31:59.520989 | orchestrator | --- 192.168.112.200 ping statistics ---
2026-03-28 01:31:59.521012 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-28 01:31:59.521033 | orchestrator | rtt min/avg/max/mdev = 2.234/3.871/6.706/2.012 ms
2026-03-28 01:31:59.521067 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:31:59.521090 | orchestrator | + ping -c3 192.168.112.131
2026-03-28 01:31:59.530606 | orchestrator | PING 192.168.112.131 (192.168.112.131) 56(84) bytes of data.
2026-03-28 01:31:59.530711 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=1 ttl=63 time=6.17 ms
2026-03-28 01:32:00.527763 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=2 ttl=63 time=1.75 ms
2026-03-28 01:32:01.530403 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=3 ttl=63 time=2.08 ms
2026-03-28 01:32:01.530557 | orchestrator |
2026-03-28 01:32:01.530577 | orchestrator | --- 192.168.112.131 ping statistics ---
2026-03-28 01:32:01.530590 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:32:01.530603 | orchestrator | rtt min/avg/max/mdev = 1.749/3.332/6.166/2.008 ms
2026-03-28 01:32:01.531259 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:32:01.531358 | orchestrator | + ping -c3 192.168.112.149
2026-03-28 01:32:01.544889 | orchestrator | PING 192.168.112.149 (192.168.112.149) 56(84) bytes of data.
2026-03-28 01:32:01.544990 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=1 ttl=63 time=9.67 ms
2026-03-28 01:32:02.540618 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=2 ttl=63 time=3.23 ms
2026-03-28 01:32:03.541459 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=3 ttl=63 time=2.51 ms
2026-03-28 01:32:03.541699 | orchestrator |
2026-03-28 01:32:03.541752 | orchestrator | --- 192.168.112.149 ping statistics ---
2026-03-28 01:32:03.541766 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-28 01:32:03.541777 | orchestrator | rtt min/avg/max/mdev = 2.514/5.138/9.674/3.220 ms
2026-03-28 01:32:03.542159 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-03-28 01:32:03.743842 | orchestrator | ok: Runtime: 0:08:30.904560
2026-03-28 01:32:03.796762 |
2026-03-28 01:32:03.796902 | TASK [Run tempest]
2026-03-28 01:32:04.506873 | orchestrator | + set -e
2026-03-28 01:32:04.507056 | orchestrator | + source /opt/manager-vars.sh
2026-03-28 01:32:04.507078 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-28 01:32:04.507088 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-28 01:32:04.507097 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-28 01:32:04.507107 | orchestrator | ++ CEPH_VERSION=reef
2026-03-28 01:32:04.507117 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-28 01:32:04.507149 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-28 01:32:04.507166 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-28 01:32:04.507180 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-28 01:32:04.507189 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-28 01:32:04.507202 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-28 01:32:04.507210 | orchestrator | ++ export ARA=false
2026-03-28 01:32:04.507218 | orchestrator | ++ ARA=false
2026-03-28 01:32:04.507228 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-28 01:32:04.507235 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-28 01:32:04.507242 | orchestrator | ++ export TEMPEST=true
2026-03-28 01:32:04.507252 | orchestrator | ++ TEMPEST=true
2026-03-28 01:32:04.507260 | orchestrator | ++ export IS_ZUUL=true
2026-03-28 01:32:04.507267 | orchestrator | ++ IS_ZUUL=true
2026-03-28 01:32:04.507275 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.253
2026-03-28 01:32:04.507283 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.253
2026-03-28 01:32:04.507290 | orchestrator | ++ export EXTERNAL_API=false
2026-03-28 01:32:04.507312 | orchestrator | ++ EXTERNAL_API=false
2026-03-28 01:32:04.507320 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-28 01:32:04.507327 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-28 01:32:04.507343 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-28 01:32:04.507351 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-28 01:32:04.507369 | orchestrator |
2026-03-28 01:32:04.507378 | orchestrator | # Tempest
2026-03-28 01:32:04.507385 | orchestrator |
2026-03-28 01:32:04.507392 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-28 01:32:04.507400 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-28 01:32:04.507407 | orchestrator | + echo
2026-03-28 01:32:04.507415 | orchestrator | + echo '# Tempest'
2026-03-28 01:32:04.507423 | orchestrator | + echo
2026-03-28 01:32:04.507430 | orchestrator | + [[ ! -e /opt/tempest ]]
2026-03-28 01:32:04.507437 | orchestrator | + osism apply tempest --skip-tags run-tempest
2026-03-28 01:32:16.742666 | orchestrator | 2026-03-28 01:32:16 | INFO  | Task 492d516d-1087-4e0c-9de5-c6f395288df8 (tempest) was prepared for execution.
2026-03-28 01:32:16.742821 | orchestrator | 2026-03-28 01:32:16 | INFO  | It takes a moment until task 492d516d-1087-4e0c-9de5-c6f395288df8 (tempest) has been started and output is visible here.
2026-03-28 01:33:41.060349 | orchestrator |
2026-03-28 01:33:41.060576 | orchestrator | PLAY [Run tempest] *************************************************************
2026-03-28 01:33:41.060627 | orchestrator |
2026-03-28 01:33:41.060643 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] **********************
2026-03-28 01:33:41.060659 | orchestrator | Saturday 28 March 2026 01:32:21 +0000 (0:00:00.257) 0:00:00.257 ********
2026-03-28 01:33:41.060670 | orchestrator | changed: [testbed-manager]
2026-03-28 01:33:41.060691 | orchestrator |
2026-03-28 01:33:41.060718 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] *****************
2026-03-28 01:33:41.060742 | orchestrator | Saturday 28 March 2026 01:32:22 +0000 (0:00:00.833) 0:00:01.090 ********
2026-03-28 01:33:41.060759 | orchestrator | changed: [testbed-manager]
2026-03-28 01:33:41.060776 | orchestrator |
2026-03-28 01:33:41.060794 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] ***
2026-03-28 01:33:41.060813 | orchestrator | Saturday 28 March 2026 01:32:23 +0000 (0:00:01.328) 0:00:02.418 ********
2026-03-28 01:33:41.060833 | orchestrator | ok: [testbed-manager]
2026-03-28 01:33:41.060852 | orchestrator |
2026-03-28 01:33:41.060871 | orchestrator | TASK [osism.validations.tempest : Init tempest] ********************************
2026-03-28 01:33:41.060889 | orchestrator | Saturday 28 March 2026 01:32:24 +0000 (0:00:00.490) 0:00:02.908 ********
2026-03-28 01:33:41.060908 | orchestrator | changed: [testbed-manager]
2026-03-28 01:33:41.060927 | orchestrator |
2026-03-28 01:33:41.060946 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] ***************************
2026-03-28 01:33:41.060959 | orchestrator | Saturday 28 March 2026 01:32:47 +0000 (0:00:23.410) 0:00:26.319 ********
2026-03-28 01:33:41.060970 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3)
2026-03-28 01:33:41.061016 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2)
2026-03-28 01:33:41.061028 | orchestrator |
2026-03-28 01:33:41.061044 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************
2026-03-28 01:33:41.061056 | orchestrator | Saturday 28 March 2026 01:32:56 +0000 (0:00:08.735) 0:00:35.054 ********
2026-03-28 01:33:41.061067 | orchestrator | ok: [testbed-manager] => {
2026-03-28 01:33:41.061077 | orchestrator |  "changed": false,
2026-03-28 01:33:41.061088 | orchestrator |  "msg": "All assertions passed"
2026-03-28 01:33:41.061099 | orchestrator | }
2026-03-28 01:33:41.061110 | orchestrator |
2026-03-28 01:33:41.061121 | orchestrator | TASK [osism.validations.tempest : Get auth token] ******************************
2026-03-28 01:33:41.061132 | orchestrator | Saturday 28 March 2026 01:32:56 +0000 (0:00:00.170) 0:00:35.225 ********
2026-03-28 01:33:41.061143 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:33:41.061154 | orchestrator |
2026-03-28 01:33:41.061164 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] ************************
2026-03-28 01:33:41.061176 | orchestrator | Saturday 28 March 2026 01:33:00 +0000 (0:00:03.656) 0:00:38.881 ********
2026-03-28 01:33:41.061188 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:33:41.061208 | orchestrator |
2026-03-28 01:33:41.061225 | orchestrator | TASK [osism.validations.tempest : Get service catalog] *************************
2026-03-28 01:33:41.061243 | orchestrator | Saturday 28 March 2026 01:33:02 +0000 (0:00:01.891) 0:00:40.773 ********
2026-03-28 01:33:41.061261 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:33:41.061277 | orchestrator |
2026-03-28 01:33:41.061293 | orchestrator | TASK [osism.validations.tempest : Register img_file name] **********************
2026-03-28 01:33:41.061310 | orchestrator | Saturday 28 March 2026 01:33:05 +0000 (0:00:03.857) 0:00:44.630 ********
2026-03-28 01:33:41.061327 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:33:41.061345 | orchestrator |
2026-03-28 01:33:41.061364 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************
2026-03-28 01:33:41.061383 | orchestrator | Saturday 28 March 2026 01:33:06 +0000 (0:00:00.201) 0:00:44.832 ********
2026-03-28 01:33:41.061402 | orchestrator | changed: [testbed-manager]
2026-03-28 01:33:41.061421 | orchestrator |
2026-03-28 01:33:41.061439 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ******************
2026-03-28 01:33:41.061495 | orchestrator | Saturday 28 March 2026 01:33:09 +0000 (0:00:03.093) 0:00:47.926 ********
2026-03-28 01:33:41.061507 | orchestrator | changed: [testbed-manager]
2026-03-28 01:33:41.061518 | orchestrator |
2026-03-28 01:33:41.061528 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************
2026-03-28 01:33:41.061539 | orchestrator | Saturday 28 March 2026 01:33:20 +0000 (0:00:11.254) 0:00:59.180 ********
2026-03-28 01:33:41.061550 | orchestrator | changed: [testbed-manager]
2026-03-28 01:33:41.061561 | orchestrator |
2026-03-28 01:33:41.061572 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ******************
2026-03-28 01:33:41.061583 | orchestrator | Saturday 28 March 2026 01:33:21 +0000 (0:00:00.882) 0:01:00.063 ********
2026-03-28 01:33:41.061594 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:33:41.061605 | orchestrator |
2026-03-28 01:33:41.061615 | orchestrator | TASK [osism.validations.tempest : Revoke token] ********************************
2026-03-28 01:33:41.061626 | orchestrator | Saturday 28 March 2026 01:33:23 +0000 (0:00:01.690) 0:01:01.753 ********
2026-03-28 01:33:41.061637 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:33:41.061648 | orchestrator |
2026-03-28 01:33:41.061658 | orchestrator | TASK [osism.validations.tempest : Set fact for config option api_extensions] ***
2026-03-28 01:33:41.061669 | orchestrator | Saturday 28 March 2026 01:33:24 +0000 (0:00:01.570) 0:01:03.324 ********
2026-03-28 01:33:41.061691 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:33:41.061702 | orchestrator |
2026-03-28 01:33:41.061713 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] *********
2026-03-28 01:33:41.061724 | orchestrator | Saturday 28 March 2026 01:33:24 +0000 (0:00:00.220) 0:01:03.545 ********
2026-03-28 01:33:41.061747 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:33:41.061758 | orchestrator |
2026-03-28 01:33:41.061769 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] *****************
2026-03-28 01:33:41.061780 | orchestrator | Saturday 28 March 2026 01:33:25 +0000 (0:00:00.188) 0:01:03.734 ********
2026-03-28 01:33:41.061790 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:33:41.061801 | orchestrator |
2026-03-28 01:33:41.061812 | orchestrator | TASK [osism.validations.tempest : Assert floating network id has been resolved] ***
2026-03-28 01:33:41.061848 | orchestrator | Saturday 28 March 2026 01:33:29 +0000 (0:00:03.958) 0:01:07.692 ********
2026-03-28 01:33:41.061860 | orchestrator | ok: [testbed-manager -> localhost] => {
2026-03-28 01:33:41.061871 | orchestrator |  "changed": false,
2026-03-28 01:33:41.061882 | orchestrator |  "msg": "All assertions passed"
2026-03-28 01:33:41.061893 | orchestrator | }
2026-03-28 01:33:41.061904 | orchestrator |
2026-03-28 01:33:41.061915 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] **************************
2026-03-28 01:33:41.061927 | orchestrator | Saturday 28 March 2026 01:33:29 +0000 (0:00:00.187) 0:01:07.879 ********
2026-03-28 01:33:41.061938 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-03-28 01:33:41.061950 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-03-28 01:33:41.061960 | orchestrator | skipping: [testbed-manager]
2026-03-28 01:33:41.061971 | orchestrator |
2026-03-28 01:33:41.061982 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] ***********
2026-03-28 01:33:41.061992 | orchestrator | Saturday 28 March 2026 01:33:29 +0000 (0:00:00.444) 0:01:08.324 ********
2026-03-28 01:33:41.062003 | orchestrator | skipping: [testbed-manager]
2026-03-28 01:33:41.062082 | orchestrator |
2026-03-28 01:33:41.062099 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] *******************
2026-03-28 01:33:41.062110 | orchestrator | Saturday 28 March 2026 01:33:29 +0000 (0:00:00.528) 0:01:08.468 ********
2026-03-28 01:33:41.062121 | orchestrator | ok: [testbed-manager]
2026-03-28 01:33:41.062132 | orchestrator |
2026-03-28 01:33:41.062143 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] ***************************
2026-03-28 01:33:41.062153 | orchestrator | Saturday 28 March 2026 01:33:30 +0000 (0:00:00.903) 0:01:08.996 ********
2026-03-28 01:33:41.062164 | orchestrator | changed: [testbed-manager]
2026-03-28 01:33:41.062175 | orchestrator |
2026-03-28 01:33:41.062186 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] *******************
2026-03-28 01:33:41.062197 | orchestrator | Saturday 28 March 2026 01:33:31 +0000 (0:00:00.465) 0:01:09.900 ********
2026-03-28 01:33:41.062207 | orchestrator | ok: [testbed-manager]
2026-03-28 01:33:41.062218 | orchestrator |
2026-03-28 01:33:41.062229 | orchestrator | TASK [osism.validations.tempest : Copy include list] ***************************
2026-03-28 01:33:41.062240 | orchestrator | Saturday 28 March 2026 01:33:31 +0000 (0:00:00.148) 0:01:10.365 ********
2026-03-28 01:33:41.062251 | orchestrator | skipping: [testbed-manager]
2026-03-28 01:33:41.062262 | orchestrator |
2026-03-28 01:33:41.062272 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] **********************
2026-03-28 01:33:41.062283 | orchestrator | Saturday 28 March 2026 01:33:31 +0000 (0:00:00.148) 0:01:10.514 ********
2026-03-28 01:33:41.062294 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-03-28 01:33:41.062308 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-03-28 01:33:41.062327 | orchestrator |
2026-03-28 01:33:41.062345 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] **********************
2026-03-28 01:33:41.062364 | orchestrator | Saturday 28 March 2026 01:33:39 +0000 (0:00:08.130) 0:01:18.644 ********
2026-03-28 01:33:41.062382 | orchestrator | changed: [testbed-manager]
2026-03-28 01:33:41.062400 | orchestrator |
2026-03-28 01:33:41.062420 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:33:41.062477 | orchestrator | testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-28 01:33:41.062499 | orchestrator |
2026-03-28 01:33:41.062518 | orchestrator |
2026-03-28 01:33:41.062538 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:33:41.062555 | orchestrator | Saturday 28 March 2026 01:33:41 +0000 (0:00:01.052) 0:01:19.697 ********
2026-03-28 01:33:41.062566 | orchestrator | ===============================================================================
2026-03-28 01:33:41.062577 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 23.41s
2026-03-28 01:33:41.062588 | orchestrator | osism.validations.tempest : Install qemu-utils package ----------------- 11.25s
2026-03-28 01:33:41.062598 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 8.74s
2026-03-28 01:33:41.062609 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 8.13s
2026-03-28 01:33:41.062620 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 3.96s
2026-03-28 01:33:41.062630 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.86s
2026-03-28 01:33:41.062641 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.66s
2026-03-28 01:33:41.062652 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 3.09s
2026-03-28 01:33:41.062663 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.89s
2026-03-28 01:33:41.062673 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.69s
2026-03-28 01:33:41.062692 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.57s
2026-03-28 01:33:41.062703 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.33s
2026-03-28 01:33:41.062714 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 1.05s
2026-03-28 01:33:41.062725 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.90s
2026-03-28 01:33:41.062735 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.88s
2026-03-28 01:33:41.062746 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 0.83s
2026-03-28 01:33:41.062761 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.53s
2026-03-28 01:33:41.062793 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.49s
2026-03-28 01:33:41.503527 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.47s
2026-03-28 01:33:41.503645 | orchestrator | osism.validations.tempest : Resolve flavor IDs -------------------------- 0.44s
2026-03-28 01:33:41.859598 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf
2026-03-28 01:33:41.862152 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf
2026-03-28 01:33:41.864785 | orchestrator |
2026-03-28 01:33:41.864836 | orchestrator | ## IDENTITY (API)
2026-03-28 01:33:41.864850 | orchestrator |
2026-03-28 01:33:41.864860 | orchestrator | + [[ false == \t\r\u\e ]]
2026-03-28 01:33:41.864870 | orchestrator | + echo
2026-03-28 01:33:41.864880 | orchestrator | + echo '## IDENTITY (API)'
2026-03-28 01:33:41.864890 | orchestrator | + echo
2026-03-28 01:33:41.864901 | orchestrator | + _tempest tempest.api.identity.v3
2026-03-28 01:33:41.864911 | orchestrator | + local regex=tempest.api.identity.v3
2026-03-28 01:33:41.864943 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16
2026-03-28 01:33:41.865050 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-28 01:33:41.869119 | orchestrator | + tee -a /opt/tempest/20260328-0133.log
2026-03-28 01:33:45.916622 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-28 01:33:45.916732 | orchestrator | Did you mean one of these?
2026-03-28 01:33:45.916782 | orchestrator | help
2026-03-28 01:33:45.916795 | orchestrator | init
2026-03-28 01:33:46.385293 | orchestrator |
2026-03-28 01:33:46.385421 | orchestrator | ## IMAGE (API)
2026-03-28 01:33:46.385488 | orchestrator |
2026-03-28 01:33:46.385506 | orchestrator | + echo
2026-03-28 01:33:46.385522 | orchestrator | + echo '## IMAGE (API)'
2026-03-28 01:33:46.385539 | orchestrator | + echo
2026-03-28 01:33:46.385555 | orchestrator | + _tempest tempest.api.image.v2
2026-03-28 01:33:46.385571 | orchestrator | + local regex=tempest.api.image.v2
2026-03-28 01:33:46.385605 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16
2026-03-28 01:33:46.386855 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-28 01:33:46.389176 | orchestrator | + tee -a /opt/tempest/20260328-0133.log
2026-03-28 01:33:50.314342 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-28 01:33:50.314474 | orchestrator | Did you mean one of these?
2026-03-28 01:33:50.314492 | orchestrator | help
2026-03-28 01:33:50.314501 | orchestrator | init
2026-03-28 01:33:50.824618 | orchestrator |
2026-03-28 01:33:50.824735 | orchestrator | ## NETWORK (API)
2026-03-28 01:33:50.824756 | orchestrator |
2026-03-28 01:33:50.824769 | orchestrator | + echo
2026-03-28 01:33:50.824781 | orchestrator | + echo '## NETWORK (API)'
2026-03-28 01:33:50.824796 | orchestrator | + echo
2026-03-28 01:33:50.824810 | orchestrator | + _tempest tempest.api.network
2026-03-28 01:33:50.824825 | orchestrator | + local regex=tempest.api.network
2026-03-28 01:33:50.825196 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16
2026-03-28 01:33:50.827389 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-28 01:33:50.831347 | orchestrator | + tee -a /opt/tempest/20260328-0133.log
2026-03-28 01:33:54.671292 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-28 01:33:54.671387 | orchestrator | Did you mean one of these?
2026-03-28 01:33:54.671401 | orchestrator | help
2026-03-28 01:33:54.671411 | orchestrator | init
2026-03-28 01:33:55.103815 | orchestrator |
2026-03-28 01:33:55.103917 | orchestrator | ## VOLUME (API)
2026-03-28 01:33:55.103976 | orchestrator |
2026-03-28 01:33:55.104002 | orchestrator | + echo
2026-03-28 01:33:55.104021 | orchestrator | + echo '## VOLUME (API)'
2026-03-28 01:33:55.104041 | orchestrator | + echo
2026-03-28 01:33:55.104058 | orchestrator | + _tempest tempest.api.volume
2026-03-28 01:33:55.104076 | orchestrator | + local regex=tempest.api.volume
2026-03-28 01:33:55.104541 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16
2026-03-28 01:33:55.105139 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-28 01:33:55.110947 | orchestrator | + tee -a /opt/tempest/20260328-0133.log
2026-03-28 01:33:59.027354 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-28 01:33:59.027528 | orchestrator | Did you mean one of these?
2026-03-28 01:33:59.027553 | orchestrator | help
2026-03-28 01:33:59.027568 | orchestrator | init
2026-03-28 01:33:59.452561 | orchestrator |
2026-03-28 01:33:59.452665 | orchestrator | ## COMPUTE (API)
2026-03-28 01:33:59.452681 | orchestrator |
2026-03-28 01:33:59.452699 | orchestrator | + echo
2026-03-28 01:33:59.452711 | orchestrator | + echo '## COMPUTE (API)'
2026-03-28 01:33:59.452723 | orchestrator | + echo
2026-03-28 01:33:59.452735 | orchestrator | + _tempest tempest.api.compute
2026-03-28 01:33:59.452746 | orchestrator | + local regex=tempest.api.compute
2026-03-28 01:33:59.453625 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16
2026-03-28 01:33:59.454674 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-28 01:33:59.460648 | orchestrator | + tee -a /opt/tempest/20260328-0133.log
2026-03-28 01:34:03.295133 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-28 01:34:03.295257 | orchestrator | Did you mean one of these?
2026-03-28 01:34:03.295281 | orchestrator | help
2026-03-28 01:34:03.295295 | orchestrator | init
2026-03-28 01:34:03.727181 | orchestrator |
2026-03-28 01:34:03.727304 | orchestrator | ## DNS (API)
2026-03-28 01:34:03.727329 | orchestrator |
2026-03-28 01:34:03.727349 | orchestrator | + echo
2026-03-28 01:34:03.727367 | orchestrator | + echo '## DNS (API)'
2026-03-28 01:34:03.727384 | orchestrator | + echo
2026-03-28 01:34:03.727396 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2
2026-03-28 01:34:03.727408 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2
2026-03-28 01:34:03.727786 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16
2026-03-28 01:34:03.728550 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-28 01:34:03.733061 | orchestrator | + tee -a /opt/tempest/20260328-0134.log
2026-03-28 01:34:07.500830 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-28 01:34:07.500932 | orchestrator | Did you mean one of these?
2026-03-28 01:34:07.500943 | orchestrator | help
2026-03-28 01:34:07.500951 | orchestrator | init
2026-03-28 01:34:07.933251 | orchestrator |
2026-03-28 01:34:07.933330 | orchestrator | ## OBJECT-STORE (API)
2026-03-28 01:34:07.933340 | orchestrator |
2026-03-28 01:34:07.933347 | orchestrator | + echo
2026-03-28 01:34:07.933354 | orchestrator | + echo '## OBJECT-STORE (API)'
2026-03-28 01:34:07.933360 | orchestrator | + echo
2026-03-28 01:34:07.933367 | orchestrator | + _tempest tempest.api.object_storage
2026-03-28 01:34:07.933375 | orchestrator | + local regex=tempest.api.object_storage
2026-03-28 01:34:07.933981 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16
2026-03-28 01:34:07.934533 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-28 01:34:07.936658 | orchestrator | + tee -a /opt/tempest/20260328-0134.log
2026-03-28 01:34:11.689279 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-28 01:34:11.689507 | orchestrator | Did you mean one of these?
2026-03-28 01:34:11.689526 | orchestrator | help
2026-03-28 01:34:11.689538 | orchestrator | init
2026-03-28 01:34:12.435087 | orchestrator | ok: Runtime: 0:02:08.009233
2026-03-28 01:34:12.455725 |
2026-03-28 01:34:12.455879 | TASK [Check prometheus alert status]
2026-03-28 01:34:12.990201 | orchestrator | skipping: Conditional result was False
2026-03-28 01:34:12.994331 |
2026-03-28 01:34:12.994617 | PLAY RECAP
2026-03-28 01:34:12.994756 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0
2026-03-28 01:34:12.994823 |
2026-03-28 01:34:13.225566 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-03-28 01:34:13.226951 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-28 01:34:14.035962 |
2026-03-28 01:34:14.036160 | PLAY [Post output play]
2026-03-28 01:34:14.053301 |
2026-03-28 01:34:14.053459 | LOOP [stage-output : Register sources]
2026-03-28 01:34:14.127636 |
2026-03-28 01:34:14.128104 | TASK [stage-output : Check sudo]
2026-03-28 01:34:15.011775 | orchestrator | sudo: a password is required
2026-03-28 01:34:15.170501 | orchestrator | ok: Runtime: 0:00:00.016887
2026-03-28 01:34:15.182552 |
2026-03-28 01:34:15.182714 | LOOP [stage-output : Set source and destination for files and folders]
2026-03-28 01:34:15.234116 |
2026-03-28 01:34:15.234439 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-03-28 01:34:15.314354 | orchestrator | ok
2026-03-28 01:34:15.323177 |
2026-03-28 01:34:15.323317 | LOOP [stage-output : Ensure target folders exist]
2026-03-28 01:34:15.808387 | orchestrator | ok: "docs"
2026-03-28 01:34:15.808782 |
2026-03-28 01:34:16.062378 | orchestrator | ok: "artifacts"
2026-03-28 01:34:16.312078 | orchestrator | ok: "logs"
2026-03-28 01:34:16.335879 |
2026-03-28 01:34:16.336064 | LOOP [stage-output : Copy files and folders to staging folder]
2026-03-28 01:34:16.372339 |
2026-03-28 01:34:16.372710 | TASK [stage-output : Make all log files readable]
2026-03-28 01:34:16.668969 | orchestrator | ok
2026-03-28 01:34:16.678023 |
2026-03-28 01:34:16.678167 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-03-28 01:34:16.714007 | orchestrator | skipping: Conditional result was False
2026-03-28 01:34:16.728637 |
2026-03-28 01:34:16.728828 | TASK [stage-output : Discover log files for compression]
2026-03-28 01:34:16.753635 | orchestrator | skipping: Conditional result was False
2026-03-28 01:34:16.765370 |
2026-03-28 01:34:16.765548 | LOOP [stage-output : Archive everything from logs]
2026-03-28 01:34:16.808613 |
2026-03-28 01:34:16.808814 | PLAY [Post cleanup play]
2026-03-28 01:34:16.817653 |
2026-03-28 01:34:16.817769 | TASK [Set cloud fact (Zuul deployment)]
2026-03-28 01:34:16.883209 | orchestrator | ok
2026-03-28 01:34:16.894191 |
2026-03-28 01:34:16.894331 | TASK [Set cloud fact (local deployment)]
2026-03-28 01:34:16.928729 | orchestrator | skipping: Conditional result was False
2026-03-28 01:34:16.942619 |
2026-03-28 01:34:16.942800 | TASK [Clean the cloud environment]
2026-03-28 01:34:17.572961 | orchestrator | 2026-03-28 01:34:17 - clean up servers
2026-03-28 01:34:18.352494 | orchestrator | 2026-03-28 01:34:18 - testbed-manager
2026-03-28 01:34:18.434323 | orchestrator | 2026-03-28 01:34:18 - testbed-node-4
2026-03-28 01:34:18.529919 | orchestrator | 2026-03-28 01:34:18 - testbed-node-1
2026-03-28 01:34:18.616356 | orchestrator | 2026-03-28 01:34:18 - testbed-node-3
2026-03-28 01:34:18.699152 | orchestrator | 2026-03-28 01:34:18 - testbed-node-5
2026-03-28 01:34:18.788867 | orchestrator | 2026-03-28 01:34:18 - testbed-node-2
2026-03-28 01:34:18.892131 | orchestrator | 2026-03-28 01:34:18 - testbed-node-0
2026-03-28 01:34:18.975218 | orchestrator | 2026-03-28 01:34:18 - clean up keypairs
2026-03-28 01:34:18.990171 | orchestrator | 2026-03-28 01:34:18 - testbed
2026-03-28 01:34:19.013924 | orchestrator | 2026-03-28 01:34:19 - wait for servers to be gone
2026-03-28 01:34:29.971222 | orchestrator | 2026-03-28 01:34:29 - clean up ports
2026-03-28 01:34:30.182743 | orchestrator | 2026-03-28 01:34:30 - 55ceaee4-f33a-4c40-aa4c-8d37c3dc24c9
2026-03-28 01:34:30.440108 | orchestrator | 2026-03-28 01:34:30 - 6b43cc6c-68f6-492e-9147-a75447a41c07
2026-03-28 01:34:30.913003 | orchestrator | 2026-03-28 01:34:30 - 83e07588-365d-4f94-ac70-94406327f6be
2026-03-28 01:34:31.209549 | orchestrator | 2026-03-28 01:34:31 - 84455afc-957d-40d2-bf5d-07f5369f5eab
2026-03-28 01:34:31.513348 | orchestrator | 2026-03-28 01:34:31 - 95a2ea38-57cc-4c8e-8ad2-060d76162ec1
2026-03-28 01:34:31.741879 | orchestrator | 2026-03-28 01:34:31 - cf2f0bf1-072c-4e50-991f-dd07da424770
2026-03-28 01:34:31.964541 | orchestrator | 2026-03-28 01:34:31 - e1cb5892-15f0-40b4-9dca-a52d528fdffb
2026-03-28 01:34:32.169739 | orchestrator | 2026-03-28 01:34:32 - clean up volumes
2026-03-28 01:34:32.313895 | orchestrator | 2026-03-28 01:34:32 - testbed-volume-3-node-base
2026-03-28 01:34:32.368054 | orchestrator | 2026-03-28 01:34:32 - testbed-volume-manager-base
2026-03-28 01:34:32.414722 | orchestrator | 2026-03-28 01:34:32 - testbed-volume-1-node-base
2026-03-28 01:34:32.462516 | orchestrator | 2026-03-28 01:34:32 - testbed-volume-0-node-base
2026-03-28 01:34:32.503817 | orchestrator | 2026-03-28 01:34:32 - testbed-volume-4-node-base
2026-03-28 01:34:32.541437 | orchestrator | 2026-03-28 01:34:32 - testbed-volume-2-node-base
2026-03-28 01:34:32.583923 | orchestrator | 2026-03-28 01:34:32 - testbed-volume-1-node-4
2026-03-28 01:34:32.626656 | orchestrator | 2026-03-28 01:34:32 - testbed-volume-5-node-5
2026-03-28 01:34:32.670790 | orchestrator | 2026-03-28 01:34:32 - testbed-volume-6-node-3
2026-03-28 01:34:32.721738 | orchestrator | 2026-03-28 01:34:32 - testbed-volume-0-node-3
2026-03-28 01:34:32.766064 | orchestrator | 2026-03-28 01:34:32 - testbed-volume-8-node-5
2026-03-28 01:34:32.808153 | orchestrator | 2026-03-28 01:34:32 - testbed-volume-3-node-3
2026-03-28 01:34:32.851656 | orchestrator | 2026-03-28 01:34:32 - testbed-volume-7-node-4
2026-03-28 01:34:32.903199 | orchestrator | 2026-03-28 01:34:32 - testbed-volume-5-node-base
2026-03-28 01:34:32.946197 | orchestrator | 2026-03-28 01:34:32 - testbed-volume-4-node-4
2026-03-28 01:34:32.989405 | orchestrator | 2026-03-28 01:34:32 - testbed-volume-2-node-5
2026-03-28 01:34:33.032714 | orchestrator | 2026-03-28 01:34:33 - disconnect routers
2026-03-28 01:34:33.167205 | orchestrator | 2026-03-28 01:34:33 - testbed
2026-03-28 01:34:34.163395 | orchestrator | 2026-03-28 01:34:34 - clean up subnets
2026-03-28 01:34:34.222775 | orchestrator | 2026-03-28 01:34:34 - subnet-testbed-management
2026-03-28 01:34:34.377293 | orchestrator | 2026-03-28 01:34:34 - clean up networks
2026-03-28 01:34:34.539342 | orchestrator | 2026-03-28 01:34:34 - net-testbed-management
2026-03-28 01:34:34.834755 | orchestrator | 2026-03-28 01:34:34 - clean up security groups
2026-03-28 01:34:34.874768 | orchestrator | 2026-03-28 01:34:34 - testbed-management
2026-03-28 01:34:35.006734 | orchestrator | 2026-03-28 01:34:35 - testbed-node
2026-03-28 01:34:35.140667 | orchestrator | 2026-03-28 01:34:35 - clean up floating ips
2026-03-28 01:34:35.171683 | orchestrator | 2026-03-28 01:34:35 - 81.163.192.253
2026-03-28 01:34:35.530816 | orchestrator | 2026-03-28 01:34:35 - clean up routers
2026-03-28 01:34:35.629975 | orchestrator | 2026-03-28 01:34:35 - testbed
2026-03-28 01:34:37.506767 | orchestrator | ok: Runtime: 0:00:19.655682
2026-03-28 01:34:37.509528 |
2026-03-28 01:34:37.509659 | PLAY RECAP
2026-03-28 01:34:37.509740 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-03-28 01:34:37.509779 |
2026-03-28 01:34:37.644141 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-28 01:34:37.645909 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-28 01:34:38.402338 |
2026-03-28 01:34:38.402511 | PLAY [Cleanup play]
2026-03-28 01:34:38.419159 |
2026-03-28 01:34:38.419314 | TASK [Set cloud fact (Zuul deployment)]
2026-03-28 01:34:38.488881 | orchestrator | ok
2026-03-28 01:34:38.498989 |
2026-03-28 01:34:38.499178 | TASK [Set cloud fact (local deployment)]
2026-03-28 01:34:38.545106 | orchestrator | skipping: Conditional result was False
2026-03-28 01:34:38.560219 |
2026-03-28 01:34:38.560373 | TASK [Clean the cloud environment]
2026-03-28 01:34:39.820474 | orchestrator | 2026-03-28 01:34:39 - clean up servers
2026-03-28 01:34:40.307362 | orchestrator | 2026-03-28 01:34:40 - clean up keypairs
2026-03-28 01:34:40.323032 | orchestrator | 2026-03-28 01:34:40 - wait for servers to be gone
2026-03-28 01:34:40.368329 | orchestrator | 2026-03-28 01:34:40 - clean up ports
2026-03-28 01:34:40.479671 | orchestrator | 2026-03-28 01:34:40 - clean up volumes
2026-03-28 01:34:40.559608 | orchestrator | 2026-03-28 01:34:40 - disconnect routers
2026-03-28 01:34:40.591793 | orchestrator | 2026-03-28 01:34:40 - clean up subnets
2026-03-28 01:34:40.614738 | orchestrator | 2026-03-28 01:34:40 - clean up networks
2026-03-28 01:34:40.777012 | orchestrator | 2026-03-28 01:34:40 - clean up security groups
2026-03-28 01:34:40.820579 | orchestrator | 2026-03-28 01:34:40 - clean up floating ips
2026-03-28 01:34:40.849407 | orchestrator | 2026-03-28 01:34:40 - clean up routers
2026-03-28 01:34:41.098032 | orchestrator | ok: Runtime: 0:00:01.482841
2026-03-28 01:34:41.100771 |
2026-03-28 01:34:41.100889 | PLAY RECAP
2026-03-28 01:34:41.100970 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-03-28 01:34:41.101009 |
2026-03-28 01:34:41.236224 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-28 01:34:41.237394 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-28 01:34:41.980206 |
2026-03-28 01:34:41.980388 | PLAY [Base post-fetch]
2026-03-28 01:34:41.996847 |
2026-03-28 01:34:41.997002 | TASK [fetch-output : Set log path for multiple nodes]
2026-03-28 01:34:42.053624 | orchestrator | skipping: Conditional result was False
2026-03-28 01:34:42.069012 |
2026-03-28 01:34:42.069237 | TASK [fetch-output : Set log path for single node]
2026-03-28 01:34:42.130606 | orchestrator | ok
2026-03-28 01:34:42.145642 |
2026-03-28 01:34:42.145865 | LOOP [fetch-output : Ensure local output dirs]
2026-03-28 01:34:42.706218 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/e15732348dc84737bc9145d0d2f89ba4/work/logs"
2026-03-28 01:34:42.977496 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/e15732348dc84737bc9145d0d2f89ba4/work/artifacts"
2026-03-28 01:34:43.234953 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/e15732348dc84737bc9145d0d2f89ba4/work/docs"
2026-03-28 01:34:43.258804 |
2026-03-28 01:34:43.258992 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-03-28 01:34:44.282480 | orchestrator | changed: .d..t...... ./
2026-03-28 01:34:44.282805 | orchestrator | changed: All items complete
2026-03-28 01:34:44.282881 |
2026-03-28 01:34:45.023847 | orchestrator | changed: .d..t...... ./
2026-03-28 01:34:45.841863 | orchestrator | changed: .d..t...... ./
2026-03-28 01:34:45.867498 |
2026-03-28 01:34:45.867670 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-03-28 01:34:45.904381 | orchestrator | skipping: Conditional result was False
2026-03-28 01:34:45.907690 | orchestrator | skipping: Conditional result was False
2026-03-28 01:34:45.932885 |
2026-03-28 01:34:45.933040 | PLAY RECAP
2026-03-28 01:34:45.933125 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-03-28 01:34:45.933169 |
2026-03-28 01:34:46.070735 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-28 01:34:46.072079 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-28 01:34:46.815700 |
2026-03-28 01:34:46.815879 | PLAY [Base post]
2026-03-28 01:34:46.830799 |
2026-03-28 01:34:46.830972 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-03-28 01:34:47.860368 | orchestrator | changed
2026-03-28 01:34:47.870578 |
2026-03-28 01:34:47.870710 | PLAY RECAP
2026-03-28 01:34:47.870787 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-03-28 01:34:47.870893 |
2026-03-28 01:34:47.992718 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-28 01:34:47.995395 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-03-28 01:34:48.792737 |
2026-03-28 01:34:48.792913 | PLAY [Base post-logs]
2026-03-28 01:34:48.804340 |
2026-03-28 01:34:48.804484 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-03-28 01:34:49.288766 | localhost | changed
2026-03-28 01:34:49.299756 |
2026-03-28 01:34:49.299906 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-03-28 01:34:49.336454 | localhost | ok
2026-03-28 01:34:49.340666 |
2026-03-28 01:34:49.340782 | TASK [Set zuul-log-path fact]
2026-03-28 01:34:49.357390 | localhost | ok
2026-03-28 01:34:49.366701 |
2026-03-28 01:34:49.366816 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-28 01:34:49.405605 | localhost | ok
2026-03-28 01:34:49.412428 |
2026-03-28 01:34:49.412737 | TASK [upload-logs : Create log directories]
2026-03-28 01:34:49.932616 | localhost | changed
2026-03-28 01:34:49.935494 |
2026-03-28 01:34:49.935627 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-03-28 01:34:50.448094 | localhost -> localhost | ok: Runtime: 0:00:00.008037
2026-03-28 01:34:50.458176 |
2026-03-28 01:34:50.458418 | TASK [upload-logs : Upload logs to log server]
2026-03-28 01:34:51.050717 | localhost | Output suppressed because no_log was given
2026-03-28 01:34:51.053223 |
2026-03-28 01:34:51.054181 | LOOP [upload-logs : Compress console log and json output]
2026-03-28 01:34:51.119811 | localhost | skipping: Conditional result was False
2026-03-28 01:34:51.125818 | localhost | skipping: Conditional result was False
2026-03-28 01:34:51.137706 |
2026-03-28 01:34:51.137966 | LOOP [upload-logs : Upload compressed console log and json output]
2026-03-28 01:34:51.191776 | localhost | skipping: Conditional result was False
2026-03-28 01:34:51.192100 |
2026-03-28 01:34:51.197988 | localhost | skipping: Conditional result was False
2026-03-28 01:34:51.205787 |
2026-03-28 01:34:51.205950 | LOOP [upload-logs : Upload console log and json output]